Press & News

Dallas Technology Integrator GDT Names Eric Power Vice President of Sales, Central United States

Dallas, TX – Dallas-based technology and systems integrator GDT announced today that Eric Power has been named Vice President of Sales, Central United States, effective immediately. In his new role, Power will focus on expanding GDT’s customer base from the southern tip of Texas to the central region’s northernmost markets.

Power joins GDT after almost 20 successful years at Cisco, where he was both the top-performing account manager in US Commercial sales and the eight-year leader of Cisco’s top-performing Mid-Market sales team. In addition, in each of his last five years he was asked to conduct front-line leadership training for Cisco’s US Commercial segment and to expand the program globally via training videos.

“We’re very fortunate and excited to welcome Eric to the GDT family,” said GDT President Vinod Muthuswamy. “His proven 100+10 approach to sales leadership (giving 100% effort and spending at least 10% of one’s time helping others) fits perfectly with GDT’s corporate culture and customer-first focus.”

The Dallas Business Journal named Power to its prestigious “Top 40 Under 40 Dallas Executives” list for both his professional success and his considerable efforts outside the office. Power has coached a combined 50 seasons of youth sports and has been a Boy Scouts of America leader for over ten years. In addition, he has served as President and spokesperson for Strikes Against Cancer, a non-profit organization that produces baseball tournaments throughout North Texas, donating money for each strike thrown to help families fighting cancer and to fund cancer research.

Power has been married to his wife Aleisha for over 20 years, and they have two sons, Ethan and Coleton. Power holds a Bachelor of Science degree from the University of North Texas in Denton, Texas.

About GDT

Founded in 1996, GDT is an award-winning, international multi-vendor IT solutions provider and Cisco Gold Partner. GDT specializes in the consulting, design, deployment and management of advanced technology solutions for businesses, service providers, government agencies and healthcare organizations. The GDT team of expert solutions architects and engineers maintains the industry’s highest certification levels, helping it translate the latest ideas and technologies into innovative solutions that realize the visions of business leaders and advance their digital transformation goals.

GDT achieves highest sales of Cisco products and services in its 20+ year history

Dallas, TX – GDT, a leading IT integrator and data center solutions provider, announced today that it achieved record sales of Cisco products and services for Cisco’s fiscal year 2018, which ended on July 31st. Cisco was the very first partner of GDT, which was founded in 1996 by owner J.W. Roberts.

“Our long-term partnership with Cisco is one of the key components that has helped build GDT into the company it is today,” said Roberts. “These record revenue numbers are a testament to our strong Cisco relationship, our unwavering belief in their superior products and services, and our ongoing commitment to deliver best-of-breed solutions to GDT customers.”

GDT’s YTD 2018 growth has been due in part to tremendous sales increases in several key areas, including service provider, software, collaboration, enterprise networking and security. At a time when the IT industry is experiencing overall growth of less than 5 percent, GDT’s double-digit growth of Cisco products and software speaks volumes to its commitment to help customers achieve their digital transformation goals.

About GDT

Headquartered in Dallas, TX, with approximately 700 employees, GDT is a global IT integrator and solutions provider approaching $1 billion in annual revenue. GDT aligns itself with industry leaders, providing the design, build, delivery and management of IT solutions and services. GDT specializes in consulting on, designing, deploying and managing advanced technology solutions for businesses, service providers, government and healthcare. The GDT team of expert architects and engineers maintains the highest levels of certification, translating the latest ideas and technologies into innovative solutions that realize the vision of business leaders.


GDT achieves Advanced-Level AWS Partner Network (APN) status

Dallas, TX – GDT, a leading IT integrator and data center solutions provider, announced today that it achieved Advanced Level status within the elite AWS (Amazon Web Services) Partner Network (APN), and has also been awarded entry into the AWS Public Sector Partner Program. Advancement within APN is based on revenue generation, commitment to training, and the number and quality of customer engagements.

“Our partnership with AWS has been a very rewarding experience for GDT on a number of levels,” said Vinod Muthuswamy, GDT President. “Our ongoing commitment to leading enterprise and public-sector customers on their digital transformation journey has been greatly enhanced by our close partnership with AWS. We are eagerly anticipating continued success in the future.”

The APN Consulting Partners Program is reserved for professional services firms that help customers design, build, migrate and manage their applications and workloads on AWS. APN Consulting Partners include Network System Integrators, Managed Service Providers (MSPs) and Value-Added Resellers (VARs), and are provided access to a range of resources that ultimately help their customers better deploy, run and manage applications in the AWS Cloud.

About GDT

Headquartered in Dallas, TX, with approximately 700 employees, GDT is a global IT integrator and solutions provider approaching $1 billion in annual revenue. GDT aligns itself with industry leaders, providing the design, build, delivery and management of IT solutions and services. GDT specializes in consulting on, designing, deploying and managing advanced technology solutions for businesses, service providers, government and healthcare. The GDT team of expert architects and engineers maintains the highest levels of certification, translating the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

Dallas Technology Integrator GDT Names Troy Steele Director of Staffing Services

Dallas, TX – Dallas-based technology and systems integrator GDT today announced that Troy Steele has been named Director of Staffing Services, effective immediately. In his new role, Steele will oversee and direct GDT’s staff augmentation practice, which has a 20-year track record of helping customers improve operational efficiencies, reduce costs and drive key initiatives through the placement of IT professionals with the right skillsets.

Steele has spent the past twelve years in the staffing industry, and has a proven track record of building highly profitable staffing organizations by understanding clients’ specific needs, corporate philosophies and organizational nuances.

“We’re excited to welcome Troy to GDT,” said Meg Gordon, GDT’s Vice President of Service Operations. “His experience and expertise building successful staffing organizations will greatly enhance our focus on growing GDT’s staff augmentation practice by continuing to provide the perfect candidates to fill customers’ IT staffing needs and requirements.”

Prior to joining GDT, Steele held several executive staffing positions, most recently with Beacon Hill Staffing, where he spent eight years leading technical recruiting teams throughout Texas. Steele holds a Bachelor of Arts in Communications from Southern Illinois University in Edwardsville, Illinois.

About GDT

Founded in 1996, GDT is an award-winning, international multi-vendor IT solutions provider that maintains high-level partner status with several of the world’s leading IT solutions and hardware providers, including HPE, Cisco and Dell EMC. GDT specializes in consulting on, designing, deploying and managing advanced technology solutions for businesses, service providers, government and healthcare. The GDT team of expert architects and engineers maintains the highest levels of certification, translating the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

GDT honored as one of the top technology integrators in CRN’s 2018 Solution Provider 500 List

Dallas, TX – GDT, a leading IT integrator and data center solutions provider, announced today that CRN®, a brand of The Channel Company, has named GDT as one of the top 50 technology integrators in its 2018 Solution Provider 500 List. The Solution Provider 500 is CRN’s annual ranking by revenue of the largest technology integrators, solution providers and IT consultants in North America.

“GDT is very proud to have earned our high ranking on CRN’s 2018 Solution Provider 500 List,” said GDT President Vinod Muthuswamy. “It’s humbling to be listed with so many highly touted and respected companies, and our inclusion is further proof of our steadfast commitment to delivering digital transformation solutions for our customers.”

CRN has published the Solution Provider 500 List since 1995, and it is the predominant channel partner ranking in the industry. The list highlights the IT channel partner organizations that earned the most revenue in 2018, and is a valuable resource for vendors looking for top solution providers with which to partner. This year’s list comprises companies with a combined revenue of over $320 billion.

The complete 2018 Solution Provider 500 list is published online, and is available to technology vendors seeking out the top solution providers with which to work.

About GDT

Headquartered in Dallas, TX, with approximately 700 employees, GDT is a global IT integrator and solutions provider approaching $1 billion in annual revenue. GDT aligns itself with industry leaders, providing the design, build, delivery and management of IT solutions and services. GDT specializes in consulting on, designing, deploying and managing advanced technology solutions for businesses, service providers, government and healthcare. The GDT team of expert architects and engineers maintains the highest levels of certification, translating the latest ideas and technologies into innovative solutions that realize the vision of business leaders.


Dallas Technology Integrator GDT Names Adnan Khan Director of Hybrid Cloud and DevOps

Dallas, TX – Dallas-based technology and systems integrator GDT today announced that Adnan Khan has been named Director of Hybrid Cloud and DevOps, effective immediately. In his new role, Khan will provide technical leadership for the architecture, design and management of GDT’s software development practice, and expand on its many cloud-related initiatives.

Khan has extensive, hands-on software development leadership experience utilizing lean practices such as Agile/Scrum. With over 15 years of experience working on high-performance distributed teams, Khan is particularly skilled in the following IT technologies: Wireless WAN (CDMA and GSM), Storage Area Networking (SAN), Network Attached Storage (NAS), Android-based applications, location-based services, cloud computing, SaaS, blockchain, cryptocurrency and the Internet of Things (IoT) for both consumer and enterprise markets.

“We’re excited to welcome Adnan to GDT’s team of talented, forward-thinking IT engineers and professionals,” said Brad Davenport, GDT Vice President of Solutions Engineering. “We know his tremendous experience, wide-ranging technological expertise and unique skillsets will prove invaluable to GDT.”

Prior to joining GDT, Khan held several senior-level management positions in the IT industry, and has overseen many on- and offshore teams that consistently delivered complex software solutions, from inception to deployment. Many of those solutions are currently being used by millions of customers of some of the most noteworthy wireless carriers in the world.

Khan holds an MBA from the University of California at Irvine’s Paul Merage School of Business, and a master’s degree in Computer Science from Pakistan’s Karachi University. In addition, Khan holds several IT-related patents.

About GDT

Founded in 1996, GDT is an award-winning, international multi-vendor IT solutions provider that maintains high-level partner status with several of the world’s leading IT solutions and hardware providers, including HPE, Cisco and Dell EMC. GDT specializes in consulting on, designing, deploying and managing advanced technology solutions for businesses, service providers, government and healthcare. The GDT team of expert architects and engineers maintains the highest levels of certification, translating the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

GDT and CloudFabrix to Jointly Offer NextGen IT Transformation Services

Dallas, TX – GDT, a leading IT integrator and data center solutions provider, and CloudFabrix, an AIOps software vendor, have joined forces to accelerate the IT transformation journey for customers with next-generation managed services built on the CloudFabrix cfxDimensions AIOps platform. As a result, GDT will enhance its current managed services offerings, which include cloud, hybrid IT, IoT and customized DevOps solutions. Ideal for VARs and MSPs, the CloudFabrix AIOps platform provides product and services suites for enterprise customers and MSPs, and offers a wide array of foundational capabilities, including any-time, any-source data ingestion, dynamic asset discovery, advanced analytics, machine learning and blockchain, among others.

The CloudFabrix AIOps platform, which addresses cloud, security and architectural needs, also provides implementation services and enterprise support to VARs and MSPs, all of which greatly reduces partners’ time to value (TtV). Combined with GDT’s tremendous engineering skillsets and vast experience providing managed services to customers of all sizes across a wide range of industries, the platform will further enhance what GDT has provided customers for over 20 years: highly innovative IT solutions delivered with a customer-first focus.

“CloudFabrix has already enabled GDT to address many of the architectural and security needs of our customers,” said GDT President Vinod Muthuswamy. “And that, combined with our experience delivering managed services, cloud, hybrid IT, IoT and customized DevOps solutions to customers, will accelerate and improve upon our ability to provide innovative technological solutions that ultimately help customers work on the projects that will help shape their organization’s future.”

Said CloudFabrix Chief Revenue Officer Kishan Bulusu, “We are excited about working closely with GDT, a network integrator that’s made a tremendous name for itself in the managed services, cloud and hybrid IT space. The initiatives we’ve developed at an organic level will not only enhance GDT’s service offerings, but better serve the MSP community at large. Partnering with GDT will also help CloudFabrix enhance our product and platform offerings, and allow us to focus on NextGen technological and architectural capabilities. This will ultimately help CloudFabrix better address and serve the unique needs of our partners’ customers.”

About GDT

Headquartered in Dallas, TX, with approximately 700 employees, GDT is a global IT integrator and solutions provider approaching $1 billion in annual revenue. GDT aligns itself with industry leaders, providing the design, build, delivery and management of IT solutions and services. GDT specializes in consulting on, designing, deploying and managing advanced technology solutions for businesses, service providers, government and healthcare. The GDT team of expert architects and engineers maintains the highest levels of certification, translating the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

About CloudFabrix

CloudFabrix enables responsive, business-aligned IT by making IT more agile, efficient and analytics-driven. CloudFabrix helps enterprises holistically develop, modernize and govern IT processes, applications and operations to meet business outcomes in a consistent and automated manner. The CloudFabrix AIOps Platform simplifies and unifies IT operations and governance of both traditional and modern applications across multi-cloud environments, and it accelerates enterprises’ cloud-native journeys by providing many built-in foundational services and turnkey operational capabilities. CloudFabrix is headquartered in Pleasanton, CA.

GDT Wins VMware 2017 Regional Partner Innovation Award

Partners Awarded for Extraordinary Performance and Notable Achievements

GDT today announced that it has received the Americas VMware Partner Innovation Award for the Transform Networking & Security category. GDT was recognized at VMware Partner Leadership Summit 2018, held in Scottsdale, AZ.

“We congratulate GDT on winning a VMware Partner Innovation Award for the Transform Networking & Security category, and look forward to our continued collaboration and innovation,” said Frank Rauch, vice president, Americas Partner Organization, VMware. “VMware and our partners will continue to empower organizations of all sizes with technologies that enable digital transformation.”

GDT President Vinod Muthuswamy said, “GDT is honored to have received the Americas VMware Partner Innovation Award in the Networking & Security category. It’s humbling to know our innovation and focus in network and security transformation is being recognized by leaders like VMware. Our close partnership with VMware is greatly enabling our customers to realize their Hybrid IT and digital transformation vision and goals.”

Recipients of an Americas VMware Partner Innovation Award were acknowledged in 14 categories for their outstanding performance and distinctive achievements during 2017.

Americas Partner of the Year Award categories included:

  • Cloud Provider
  • Emerging Markets Distributor
  • Empower the Digital Workspace
  • Integrate Public Clouds
  • Marketing
  • Modernize Data Centers
  • OEM
  • Professional Services
  • Regional Distributor
  • Regional Emerging Markets Partner
  • Solution Provider
  • Transform Networking & Security
  • Transformational Solution Provider
  • Technology

About VMware Partner Leadership Summit 2018

VMware Partner Leadership Summit 2018 offered VMware partners the opportunity to engage with VMware executives and industry peers to explore business opportunities, customer use cases, solution practices, and partnering best practices. As an invitation-only event, it provided partners with resources to develop and execute comprehensive go-to-market plans. VMware Partner Leadership Summit 2018 concluded with award ceremonies recognizing outstanding achievements in the VMware partner ecosystem.

About GDT

Headquartered in Dallas, TX, with approximately 700 employees, GDT is a global IT integrator and solutions provider approaching $1 billion in annual revenue. GDT aligns itself with industry leaders, providing the design, build, delivery and management of IT solutions and services. GDT specializes in consulting on, designing, deploying and managing advanced technology solutions for businesses, service providers, government and healthcare. The GDT team of expert architects and engineers maintains the highest levels of certification, translating the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

# # #

VMware is a registered trademark of VMware, Inc. in the United States and other jurisdictions.


Dallas Network Integrator GDT’s Spring Fling Bar-B-Que Results in $10,000 Donation to New Horizons of North Texas

Dallas, TX – Dallas-based technology and systems integrator GDT announced at its Annual Spring Fling Bar-B-Que, May 3rd and 4th, that New Horizons of North Texas will receive this year’s $10,000 winner’s donation.

GDT’s Annual Spring Fling Bar-B-Que was started in 2014 by GDT CEO J.W. Roberts to further the company’s fun atmosphere while benefiting local charities. The event pits ten GDT Account Executives against one another to determine who can smoke the best brisket and ribs. Each cross-departmental team included GDT technology partners such as Cisco, HPE, Dell EMC, Pure Networks, VMware, Veeam, Juniper Networks, Hypercore Networks, Cohesity, QTS, APS, Jive Communications and Global Knowledge.

The Spring Fling Bar-B-Que is centered around a 19-hour, highly competitive cooking event, featuring state-of-the-art smokers, secretive pre-event meetings, and closely guarded recipes. It’s a great event full of food and fun, and provides the perfect environment for camaraderie and relationship building for the over 300 GDT employees in Dallas. And, of course, a winner is crowned and unveils the charity selected to receive the $10,000 donation. GDT Account Executive Chris Bedford, who captained the winning team, selected New Horizons of North Texas.

Said Bedford, a 20-year GDT veteran, “Our annual Spring Fling Bar-B-Que is one of the many marquee―and outrageously fun―events our marketing team produces each year, but being able to donate $10,000 to a great organization like New Horizons of North Texas makes it even more special.”

GDT’s Annual Spring Fling and Bar-B-Que is one of many examples of the company’s work hard, play hard philosophy and its ongoing commitment to giving back to the D/FW community.

About New Horizons of North Texas

New Horizons is a faith-based 501(c)(3) nonprofit dedicated to serving at-risk youth growing up in situations of poverty and academic struggle. The mission of New Horizons of North Texas is to empower at-risk youth to reach their full potential with tutoring, mentoring, and faith-building. New Horizons takes a highly relational, individualized, and long-term approach, supporting students from elementary school all the way through high school graduation and providing over 250 hours of mentorship to each child each year. To learn more, visit the New Horizons website.

About GDT

Founded in 1996, GDT is an award-winning, international multi-vendor IT solutions provider that maintains high-level partner status with several of the world’s leading IT solutions and hardware providers, including HPE, Cisco and Dell EMC. GDT specializes in consulting on, designing, deploying and managing advanced technology solutions for businesses, service providers, government and healthcare. The GDT team of expert architects and engineers maintains the highest levels of certification, translating the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

Enough with the Aggie jokes—Texas A&M’s new initiative to combat cyber threats is nothing to laugh about

By Richard Arneson

Some things just don’t make sense, like why a baseball that hits the foul pole is a fair ball. Shouldn’t it be called the fair pole? Or why hot dogs come in packs of ten but buns in packs of eight? Oh, and how about this one—it’s estimated that within the next three years almost 4 million cybersecurity jobs will go unfilled due to both a lack of interest and a lack of adequate training. It doesn’t seem possible given the number of cybersecurity incidents we hear about every week, what with the ransomware, the Trojans, the viruses, the malware, etc. You’d think cybersecurity would be attracting professionals in droves, but it isn’t. Texas A&M University is doing something about it, though.

While many larger corporations have enacted specialized apprenticeship programs in cybersecurity, including mobile training trucks for personnel, the Fightin’ Texas Aggies have taken a far more proactive approach to the issue, and it’s one from which they’re immediately benefiting. To address their cybersecurity labor shortage, they’re pairing students with AI software to protect the university’s systems from cyber-attacks. In turn, the students get security training and a great, hands-on addition to their resumes.

The Texas A&M University System, which includes eleven universities and seven state agencies, estimates that there are approximately a million attempts to hack into its systems each month. Prior to implementing this program, IT security was handled by a lean staff that included few full-time employees. Now ten students make up the majority of the IT security team, utilizing AI software to detect, monitor and remedy threats. And the university is having no trouble filling these positions: word has spread throughout campus that this high-visibility program provides insightful skill sets and extremely marketable training.

Nothing beats on-the-job experience

The students’ first order of business each day is to study a whiteboard that outlines areas within the university system that have faced, or are currently facing, a threat. The threats are compiled through AI, which also prioritizes each one. Then it’s up to the students to analyze any abnormalities and determine whether they appear suspicious by comparing them to prior attacks.

AI software is key to this initiative, serving as a springboard for inexperienced cybersecurity students by letting them evaluate threats immediately. The AI doesn’t act on the threats itself (which some consider a risky proposition in the first place); remediation is left to the students.
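The triage loop described above—AI scores and prioritizes anomalies, and students compare flagged events against known prior attacks—can be sketched roughly as follows. This is a minimal illustration only; the class names, signatures, and threshold are hypothetical, not part of Texas A&M's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class ThreatEvent:
    source: str           # where the anomaly was observed
    anomaly_score: float  # assigned by the AI layer, 0.0 to 1.0
    signature: str        # normalized fingerprint of the activity

# Hypothetical store of fingerprints from previously confirmed attacks
KNOWN_ATTACK_SIGNATURES = {"ssh-bruteforce", "sql-injection-probe"}

def triage(events, threshold=0.7):
    """Return events for human review, highest AI score first.

    Events matching a known attack signature are escalated regardless
    of score, mirroring the students' compare-to-prior-attacks step.
    """
    flagged = [
        e for e in events
        if e.anomaly_score >= threshold or e.signature in KNOWN_ATTACK_SIGNATURES
    ]
    return sorted(flagged, key=lambda e: e.anomaly_score, reverse=True)

events = [
    ThreatEvent("lab-subnet", 0.91, "ssh-bruteforce"),
    ThreatEvent("dorm-wifi", 0.30, "port-scan"),
    ThreatEvent("admin-vlan", 0.75, "unknown"),
]
for e in triage(events):
    print(e.source, e.anomaly_score)
```

The key design point is that the software only ranks and filters; the decision and remediation remain with the human analysts.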

So why the lack of professionals in cybersecurity?

Almost 50% of security executives recently surveyed by the ISSA (Information Systems Security Association) attributed the glaring lack of security professionals to two things: high turnover and a high rate of job burnout. And while Texas A&M’s SOC (Security Operations Center) isn’t immune to either, it is addressing both by drawing on the many students looking for an opportunity to work there. Those numbers also allow students to spend time training or working on side projects that make great additions to their resumes. Gig ’em.

Got questions? Call on the security experts

To find out more about cybersecurity and the threats that may soon target your organization, contact GDT’s tenured and talented security analysts. From their Security and Network Operations Centers, they manage, monitor and protect the networks of some of the most notable enterprises, service providers, healthcare organizations and government agencies in the world. They’d love to hear from you.



Can’t wait for 5G? The FCC has done something to speed up your wait time

By Richard Arneson

Whether you’re a dyed-in-the-wool technophile or just one of those people who has to be the first to have the latest gizmo or gadget, you’re probably eagerly anticipating 5G, which will provide consumers a host of benefits, including faster speeds, lower latency and a more IoT-friendly wireless infrastructure. But when you hear that 5G won’t be fully deployed for another four years, it kinda ruins the mood. Unfortunately, service providers can’t roll out 5G—or any G, for that matter—all at once. Think of the cell towers that need to be upgraded from coast to coast; it’d take almost half a million technicians working simultaneously to accomplish the feat in one fell swoop. Yes, the rollout will begin within the next couple of months, but if you’re not in one of the lucky rollout areas, you’ll have to wait…and wait…and potentially wait another four years.

…to the rescue

The Federal Communications Commission (FCC) wants to do something about that waiting. And it has. On August 2nd, it voted on rules to speed up rollouts not just of 5G, but of new networks as well. These rules are known as One Touch Make Ready (OTMR), a non-descriptive name for rules that address the strict, cumbersome requirements specifying the distance that must separate network elements attached to a pole—usually a telephone pole.

When a new service provider enters a market, or an existing one wants to address poor connectivity in an area by adding a site, any equipment or wires already attached to the pole must be reconfigured to ensure the required separation is maintained. The process is so painful that many speculate it’s the very reason Google Fiber had to greatly throttle back its once-aggressive deployment schedule.

Currently, laws related to cell towers are handled by the jurisdiction in which the towers reside. The resulting installations are a headache at best and a nightmare at worst, and pole access for new competitors is relegated to “least important” status. Because accommodating new competitors relies on incumbent carriers reconfiguring their own equipment and wiring, the process is, as you can probably imagine, not one of their higher priorities.

According to FCC Chairman Ajit Pai: “For a competitive entrant, especially a small company, breaking into the market can be hard, if not impossible, if your business plan relies on other entities to make room for you on those poles. Today, a broadband provider that wants to attach fiber or other equipment to a pole first must wait for, and pay for, each existing attacher [installer] to sequentially move existing equipment and wires. This can take months. And the bill for multiple truck rolls adds up. For companies of any size, pole attachment problems represent one of the biggest barriers to broadband deployment.”

Beyond 5G, the FCC believes the new rule will mean fiber passing 8.3 million additional premises, with more than $12.6 billion spent on those projects. In addition to faster installation of cell sites, the new rules will greatly enhance the fiber density available for wireless backhaul.

Mobility Experts with answers

If you have questions about your organization’s current mobility strategy (or the one you’d like to implement) and how 5G will affect it, contact GDT’s Mobility Solutions experts. They’re a team of experienced solutions architects and engineers who have implemented mobility solutions for some of the largest organizations in the world. They’d love to hear from you.

Usually just a minor annoyance, the Flash Player update can now result in a major ordeal

By Richard Arneson

It’s one of the most common speed bumps on the Internet highway: the Adobe Flash Player update message. It’s unexpected and never welcome—a little like a tornado, but not quite that bad. It may not trump some of the other digital speed bumps, like the Windows update you have to sit through after you’ve hit “Shut Down” on your computer (you know, the one that usually occurs at 5:30 on Friday afternoon), but it still serves as one of computing’s many figurative mosquitoes. And while the Flash update has only ever been a minor annoyance, you can now place it in another category: crippling.

Palo Alto Networks, the Santa Clara, CA-based cybersecurity firm, discovered earlier this month that a fake Flash updater has been loading malware onto networks since early August. Here’s the interesting part: it actually installs a legitimate Flash update. But before you think cyber attackers have gone soft, know that they’re downloading Flash for distraction purposes only. While the update takes place, another upload occurs—the installation of a bot named XMRig, which mines a cryptocurrency named Monero. Once the installs are complete, the user, unbeknownst to them, begins mining Monero. And there you have it: cryptojacking.

Cryptojacking with XMRig

Once the phony Flash update is launched, the user is directed to a fake URL that, of course, isn’t connected to an Adobe server. After the Flash update is installed, XMRig accesses a Monero mining pool—and the fun begins. XMRig begins mining Monero from infected, networked computers as unknowing users merrily work along, completing their day-to-day tasks. Keep in mind that Monero is a legitimate form of cryptocurrency. Like Bitcoin for ransomware, Monero is the cryptocurrency of choice for cryptojacking. Monero’s website claims it is “the leading cryptocurrency with a focus on private and censorship-resistant transactions.” (Unlike Bitcoin, Monero doesn’t require the recipient to disclose their wallet address to receive payment(s)).

Let’s back up a bit—here’s how crypto mining works

It can be argued that cryptojacking has replaced ransomware as cyberattackers’ malevolent deed of choice. It’s important to remember, though, that cryptocurrency mining is legal—it’s how cryptocurrency works. Mining is the process of finding, then adding transactions to, a currency’s public ledger. Transactions are grouped into blocks, and the chain of blocks forms the ledger—hence the name blockchain.

A blockchain’s ledger isn’t housed in one (1) centralized location. Instead, it is simultaneously managed through duplicate databases across a network of computers—millions of them. Encryption controls and protects the creation of new coins and the transfer of funds, without disclosing ownership. The transactions enter circulation through mining, which basically turns computing resources into coins. Anybody can mine cryptocurrency by downloading open-source mining software, which allows their computer to mine, or account for, the currency. Mining solves a mathematical problem associated with each transaction, which verifies that the sender’s account can cover the payment, determines to which wallet the payment should be made, and updates the all-important ledger. The first one to solve the problem gets paid a commission in the particular currency it’s mining.
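The mining race described above—solve the problem first, collect the commission—can be sketched in a few lines of Python. This is a toy proof-of-work illustration, not any real currency’s protocol; the hashing scheme, difficulty and block contents are simplified assumptions:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Brute-force a nonce until sha256(block_data + nonce) starts with
    `difficulty` zero hex digits -- a toy version of the 'mathematical
    problem' miners race to solve."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # the first solver would collect the block reward
        nonce += 1
```

At three or four leading zeros this runs in well under a second on a laptop; real networks tune the difficulty so that solving takes serious, purpose-built hardware—which is exactly why cryptojackers steal other people’s computing power instead of buying their own.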

In cryptocurrency’s infancy, the computing power needed was minimal. Basically, anybody could do it. Now the computing power needed to mine cryptocurrency is considerable, with miners requiring expensive, purpose-built, super powerful computers. Without that, they can forget making decent miner money. But building the computing resources needed to profitably mine cryptocurrency today is expensive, often cost prohibitive. In cryptojacking, however, the cyber attackers network together infected computers and utilize their computing power without spending a dime. In turn, the victim’s infected computer is busy surreptitiously mining cryptocurrency and slowing to a crawl. The bad guys enjoy pure profit.

Got questions? Call on the Security experts

To find out more about cryptojacking, ransomware, malware, Trojans, and the host of security-related issues your organization needs to consider and fend off, contact GDT’s tenured and talented security analysts. From their Security and Network Operations Centers, they manage, monitor and protect the networks of some of the most notable enterprises, service providers, healthcare organizations and government agencies in the world. They’d love to hear from you.

Get more information about network security here:

Gen V

Sexy, yes, but potentially dangerous

Tetration—do you know its meaning?

It’s in their DNA

Rx for IT departments—a security check-up

When SOC plays second fiddle to NOC, you could be in for an expensive tune

How to protect against Ransomware


Hybrid Cloud Conundrums? Consider HPE GreenLake Flex Cap

By Richard Arneson

If you need to purchase a container to hold what you’re estimating is between 48 and 60 ounces of liquid, are you going to buy the 50- or 70-ounce container? Yes, you’ll play it safe and get the bigger one, but you’ll spend more money and it will take up more space on the shelf. And it won’t be very satisfying, especially if you miscalculated and only had thirty-six (36) ounces to begin with. In short, you didn’t do a very good job of right-sizing your container solution. And that’s exactly what IT administrators have struggled with for years, whether it’s bandwidth, equipment or any other type of technology solution. Unfortunately, right-sizing an IT recipe usually requires a dash of hope.

Pay-as-you-go trumps the guesswork of right-sizing

HPE GreenLake Flex Capacity is a hybrid cloud solution that gives customers a public cloud experience, but with the peace of mind that often comes with on-premises deployments. It’s a pay-as-you-go solution, so right-sizing can become a relic of the high-tech industry. HPE GreenLake Flex Cap provides capacity on-demand and scales quickly to meet growth needs, but without the wait times–often long ones–that come with circuit provisioning.

And it gets better―management is greatly simplified; customers can manage all their cloud resources, and in the environment of their choosing. HPE GreenLake customers enjoy:

  • Limited risk by maintaining certain workloads on-prem
  • Better and more accurate alignment of cash flows, no upfront costs and a pay-as-you-go model
  • Savings by no longer wasting dollars on circuit overprovisioning
  • Immediate scalability to address the needs of your network
  • Real-time failure alerts with remediation recommendations
  • The ability to perfectly size capacity

And with these integrated, turnkey packages, your organization can enjoy HPE GreenLake Flex Cap even faster.


GreenLake for Microsoft Azure or Amazon Web Services (AWS)

Whether you’re utilizing Microsoft Azure or Amazon Web Services (AWS) for your cloud environment, GreenLake Flex Cap can provide turnkey controls for performance, compliance and costs.

GreenLake for SAP HANA

SAP HANA customers can enjoy a fully managed, on-prem appliance with right-sized SAP®-certified hardware and services to satisfy workload performance and availability. HPE is the leading supplier of SAP infrastructure, and HPE GreenLake for SAP HANA delivers the performance, control and security needed for the most demanding of mission-critical applications.

GreenLake for Big Data

GreenLake for Big Data accelerates time-to-value with asymmetric or symmetric configurations, and there are no security issues or risks associated with repatriation once datasets are shipped to third-party data centers.

GreenLake for EDB Postgres

Reduce TCO and simplify operations with this Oracle-compatible open-source database platform. Your teams will be able to better focus on applications and insights that will drive business outcomes.

GreenLake for Backup

Pay for exactly what you back up. Yes, it’s that simple. GreenLake for backup includes Commvault software that’s pre-integrated on your choice of HPE StoreOnce or HPE 3PAR Storage.

Now combine GreenLake with HPE Pointnext

HPE Pointnext can not only monitor and manage the entire solution, but it provides customers with a portal that delivers key analytics and detailed consumption metrics.

Questions? Call on the experts

If you have additional questions or need more information about HPE GreenLake Flex Capacity and the many benefits it can provide your IT organization, contact one of the talented and tenured solutions architects or engineers at GDT. They’d love to hear from you.

Answer: You get a solution better than your current one

By Richard Arneson

Question: What happens when you combine AI (artificial intelligence) and Wi-Fi? Apologies to Alex Trebek and Jeopardy, but this particular solution is so cool, exciting and effective that I couldn’t bury the lede and had to skip straight to the answer.

Wi-Fi has been part of our lexicon and lifestyle since 2003 and, no question, it was revolutionary. Connecting your computer to the network without wires…could it get any better than that? The technology remained fairly stagnant and unchanged for several years, however. While any claim that Wi-Fi was stuck in the Dark Ages would have been a gross exaggeration, it was beginning to feel a bit stale. And with that came dissatisfaction, user (un)friendly experiences and, ultimately, the worst adjective consumers can attach to a technology–frustrating.

It all changed in 2007, though. The launch of the iPhone, including its phenomenally successful marketing campaign, resulted in consumers snapping them up like snow cones on a hot summer day. Hello, smart device. Then came other smart devices—tablets, watches, doorbells, thermostats, et al.–which generate thirteen times (13x) more traffic than non-smart ones. And then came Mist.

Mist Systems

Based in Cupertino, CA, four-year-old Mist Systems was funded by several top investors, most notably Cisco Investments. The folks at Mist wondered why 12.6 billion smart devices worldwide were relying on a technology that wasn’t terribly, well, smart. They set out to develop a learning wireless LAN solution that would, among other features, replace time-consuming, often frustrating manual tasks with proactive automation.

Mist began with three (3) end goals in mind: improve network reliability, transform IT services and enhance the user experience.

Mist set out to fix the ills of Radio Resource Management (RRM), which manages several characteristics inherent in wireless communications, such as whether there is any co-channel interference or signal issues. The problem with RRM is that it has always been hamstrung by a lack of user insights due to poor data collection. Not so with Mist, which utilizes AI to create a Wi-Fi solution that heals itself.

Mist constantly collects, per user, RF (radio frequency) information regarding coverage, throughput, capacity and performance. The collected data is analyzed through AI to proactively make changes that enhance the user experience.

Service Level Expectations (SLEs)

Mist offers the only Wi-Fi solution on the market that allows for SLEs that clients can customize based on their needs. In addition to traditional metrics, such as coverage, throughput, uptime and latency, Mist customers can set, monitor and enforce their defined SLEs, which allows them to better understand just how issues such as jitter, packet loss and latency are adversely affecting end users.

Here’s why Mist is truly refreshing

Mist offers the only enterprise-class wireless solution that is powered by a microservices cloud architecture and doesn’t require a WLAN Controller. As a result, customers enjoy enhanced agility and scalability from an AI engine that gathers data and insight, and utilizes automation to deliver a self-healing Wi-Fi solution.

Mist introduces customers to Marvis, their virtual network assistant built on AI, deep learning and machine learning. By using Natural Language Processing (NLP), Marvis provides IT administrators with immediate answers, so time once wasted digging for them with Command Line Interfaces (CLIs) or dashboards can be better spent on other tasks or projects.

Mist can lay claim to another first―they offer the only Enterprise Bluetooth Low Energy (BLE) solution that doesn’t require manual calibration. And additional beacons aren’t required; Mist developed proprietary virtual BLEs, which through a simple mouse click or API can be moved around as needed.

Mist’s solution provides what Wi-Fi has always aspired to be, and then some―a predictable, reliable and self-healing Wi-Fi solution based on extensive data collection, AI and machine learning.

There are no dumb smart questions

If you have questions about smart devices, IoT, or Wi-Fi solutions―including Mist Systems’―contact the talented, tenured solutions architects and engineers at GDT’s IoT and Mobility Solutions practice. They’d love to hear from you.

For more about Mobility Solutions and IoT…

Click here to get more information about mobility solutions, and here to watch a video about how GDT delivered a secure mobility solution to a large retailer.

The 6 (correctly spelled) R’s of a Cloud Migration

By Richard Arneson

It’s always confounded me that two (2) of the three (3) R’s of education―reading, writing and arithmetic―were spelled wrong. Whoever coined the phrase was obviously trying to set students up to fail at spelling. Thankfully, we work in an industry that understands the proper spelling of R words; in this case, I’m referring to the six (6) R’s of a cloud migration. That’s not to say you have to pick just one (1), though. It’s not an either/or scenario. Your organization might require, if you want to fully enjoy the cloud and all it has to offer, several of the following types of cloud migrations. That’s where experience and expertise come in.

Re-host (aka Lift and Shift)

Re-hosting applications to the cloud is common, especially if a company wants to ramp up their cloud migration as quickly as possible. For instance, there might be a certain business case that demands a fast deployment. In re-hosting, applications are moved to the cloud as-is, even if cloud optimizations haven’t taken place. As a result, companies can enjoy quick savings, but not everything they might want due to the abbreviated timeline.

If workloads and applications have been re-hosted, it can make it easier to optimize and re-architect in the future. Amazon Web Services (AWS) has a solution for this called Snowball, which securely transfers data at petabyte-scale into and out of their cloud. Also, their VM Import/Export automated transfer tool allows you to utilize existing VM purchases by easily importing them into the AWS Cloud.

Re-platform (aka Lift, Shift and Tweak)

Re-platforming takes the re-hosting approach, but also addresses a common issue―not all applications can be migrated to the cloud. While an application may not be able to run on an IaaS platform, it may be able to run on IaaS servers. In this case, an emulator can be used, which runs in the cloud of the provider you choose (AWS, Microsoft Azure, Google Cloud). The applications will appear no different to end users―same front end, interfaces, look and feel. If rebuilding a current system is cost prohibitive, you can still enjoy cloud technologies on a legacy infrastructure through re-platforming.

Re-architect (aka Re-write)

Re-architecting is like purchasing a Mercedes with all the options and features attached. Yes, it’ll cost you, but if you’re looking for a superior level of performance, business continuity, flexibility and scalability, this will be your best option. It’s a good bet that companies touting and enjoying tremendous cloud benefits have utilized this migration strategy.

And if you initially choose to re-host an application, that doesn’t mean you can’t re-architect it in the future. If you’d like, re-host now, re-architect later. Doing so can reduce the project’s complexity by separating application re-design from the cloud migration.

Re-purchase (aka Drop and Shop)

Think Salesforce. Think SaaS. Re-purchasing is simply a matter of changing the licensing. In the case of Salesforce, you’re going from a legacy CRM to a cloud option. You’ll save both hard and soft costs, such as the time it takes an IT staffer to manage, maintain and monitor the application.

Retire (aka Curbside pickup)

One of the key elements of creating a cloud migration strategy is to first conduct a thorough assessment of your existing environment, applications, workloads, etc. If done properly and comprehensively, the assessment will be able to determine which IT elements can be hauled out to the trash. And with retirement comes cost savings.

Retain (aka You can stay…for a while)

If you’re not ready to move a particular application to the cloud for whatever reason (depreciation, performance concerns, gut feeling…), you may want to keep the status quo for a while. That’s not to say you’ll want to retain it forever. The more comfortable you become with the cloud and a migration, the sooner you’ll probably begin moving those retained applications to the cloud.

It all starts with Expertise―then an Assessment

Moving to the cloud is a big move; it might be the biggest move of your IT career. If you don’t have the right cloud skill sets, expertise and experience on staff, you may soon be wondering if the cloud is all it’s cracked up to be.

That’s why turning to experienced Cloud experts like those at GDT can help make your cloud dreams a reality. They hold the highest cloud certifications in the industry and are experienced in delivering solutions from GDT’s key cloud partners―AWS, Microsoft Azure and Google Cloud. They’d love to hear from you.


If you’d like to learn more about the cloud, migrating to it, considerations prior to a migration, or a host of other cloud-related topics, you can find them here:

Are you Cloud Ready?

Calculating the costs–soft and hard–of a cloud migration

Migrating to the Cloud? Consider the following

And learn how GDT’s Cloud Team helped a utility company achieve what they’d wanted for a long, long time:

A utility company reaps the benefits of the cloud…finally

Brazil now, U.S. later?

By Richard Arneson

Hopefully the answer is a resounding “NO,” but the Brazilian banking industry has recently been hit hard by “GhostDNS,” so named by China-based security research firm NetLab, which discovered the sinister malware in September. The phishing infection has hijacked over 100,000 routers in South America’s largest country and harvested customer login information for many of its largest financial services firms. It’s estimated that it has been running undetected since June of this year.

Domain Name Service (DNS) simplifies the lookup of IP addresses associated with a company’s domain name. Users can remember domain names, but servers don’t understand our nomenclature—they need an IP address. Without DNS, the Internet, which processes billions of requests at any given moment, would grind to a halt. Imagine having to keep track of all the IP addresses associated with the thousands of websites you visit, then typing them into a browser.
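That name-to-address lookup is a one-liner in most languages. Here’s a minimal Python sketch; it resolves localhost because that answer is predictable, but any public domain works the same way:

```python
import socket

# DNS in one line: turn a name humans can remember into the IP address
# servers actually route to.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```

GhostDNS works by quietly swapping out the resolver that answers these lookups, so the same familiar name returns an attacker-controlled address instead.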

Here’s how GhostDNS works

GhostDNS is spread through remote access vulnerabilities and can run on over seventy (70) different types of routers. NetLab identified over a hundred (100) different attack scripts that were deployed and discovered them running on several high-profile cloud hosting providers, including Amazon, Google and Oracle.

The attack scripts hijacked organizations’ router settings, which resulted in their traffic being sent to an alternative DNS service. This re-directed traffic headed to rogue, or phony, sites designed to mimic the landing pages of Brazil’s major banks (some telecom companies, ISPs and media outlets were targeted, as well). Users believed they were on “real” landing pages, then happily typed in their user name and password.

While GhostDNS malware has primarily affected routers in Brazil, which is one (1) of the top three (3) countries affected by botnet infections (India and China rank 1 and 2, respectively), the FBI is working to ensure it hasn’t spread to the United States. If you believe your organization may have been infected by GhostDNS, the FBI has provided an easy online way to determine that very issue here. Just type your DNS information into the search box. It’s that simple.

A four-pronged module approach to evil

  1. A DNSChanger module attacks routers that, based on collected information, are deemed target-worthy due to weak or unchanged login credentials or passwords.
  2. A Web Admin module provides a portal, of sorts, where attackers can access the phony login page.
  3. A Rogue DNS module resolves the domain names to which users believe they’re heading. Again, most of these domain names are of Brazilian financial institutions.
  4. The Phishing Web module is initiated after the goal of the Rogue DNS module has been satisfied. It then steers the fake DNS server to the end user.

As the result of NetLab’s detective work, the further spreading of GhostDNS appears to have been reined in. Networks have been shut down so remediation and enhanced security measures can be implemented. But rest assured, something as big, or bigger, will soon take its place.

IT Security questions? Turn to the Experts

GDT is a 22-year-old network and systems integrator that employs some of the most talented and tenured security analysts, solutions architects and engineers in the industry. They design, build and deploy a wide array of solutions, including managed security services and professional services. They manage GDT’s 24x7x365 Network Operations Center (NOC) and Security Operations Center (SOC) and oversee the networks and network security for some of the most notable enterprises, service providers and government agencies in the world. They’d love to hear from you.

A Robust Solution for the Entry-Level storage customer

By Richard Arneson

If your backyard is the size of a Greenwich Village apartment, you probably wouldn’t buy a tractor with a mulching attachment to mow the lawn. The same holds true for technology solutions. Why should only the biggest of the biggies get to enjoy best-of-breed, cutting-edge technology solutions? And why should they have to pay higher prices, only to be told that economies of scale prevent them from enjoying more aggressive pricing? Well, based on the recent introduction of its next generation PowerVault ME4 Series family of storage arrays, Dell EMC’s answer is obvious―They shouldn’t.

Small- and medium-sized businesses (SMBs) not only compose the vast majority of businesses in the United States, but they account for well over fifty percent (50%) of all sales. Those figures aren’t lost on Dell EMC; they obviously understand the importance of providing solutions to the businesses that fill the SMB space. Their PowerVault ME4 storage arrays allow SMBs to purchase storage arrays that perfectly fit their needs and come at a budget-friendly, easily digestible price point.

A storage solution that meets the demands unique to SMBs

Workloads are every bit as important to small- and medium-sized businesses. Theirs might come in slightly different flavors, however, and can include everything from databases and disk backups to applications needing a solid SAN/DAS solution and virtual desktop infrastructures (VDI). But smaller companies with smaller IT staffs face everything their enterprise counterparts do; namely, the expectation to manage diverse sets of IT infrastructure solutions.

The initial phase of Dell EMC’s goal to deliver a simplified storage portfolio was accomplished in early 2018 when they introduced PowerMax, its enterprise-class solution.

Not their first SMB Storage Solution rodeo

The reason behind Dell EMC’s introduction of PowerMax and PowerVault ME4 can be boiled down to one (1) word―simplification. But that’s not to say they haven’t been delivering great storage solutions for the SMB market. IDC’s Q2 2018 Worldwide Quarterly Enterprise Storage Tracker listed Dell EMC as the leader in the entry storage market. In fact, they hold a thirty-one percent (31%) revenue share in this segment. With the introduction of PowerMax and PowerVault ME4, that percentage will soon get larger.

More features, three (3) great options to choose from

The PowerVault ME4 solutions portfolio, while certainly delivering simplicity, features a number of improvements over their previous storage offerings, including larger capacity, faster performance and all-inclusive software.

Dell EMC’s PowerVault ME4 solution comes in three (3) different flavors to accommodate the precise needs of the SMB market. The ME4012 features twelve (12) drives in a 2U (3.5” high) profile and the ME4024, also 2U, comes with twenty-four (24) drives. Its big dog, the ME4084, is a 5U (19” high) array with eighty-four (84) drives. Their starting price is staggeringly low and can comfortably fit into any IT budget.

The PowerVault ME4 solutions are highly optimized and purpose-built for SAN and DAS environments, can be configured from 0-100% flash, are expandable up to 4PB and can drive up to 320K IOPS. And, as previously mentioned, all include the software you’ll need to manage, store and protect your data. And whether connecting to a high-availability SAN environment or integrating with a Dell EMC PowerEdge Server, simplification is the operative word. And they can be quickly configured with a new, intuitive HTML5 web-based interface, so management can be conducted anywhere, at any time.

If the primary word is simplification, protection isn’t far behind

With PowerVault ME4 arrays, RTOs (recovery time objectives) and RPOs (recovery point objectives) can be addressed and met through snapshots and IP replication and asynchronous multi-site FC capabilities. The result? Data protection and robust disaster recovery options.

Need more info about Dell EMC storage solutions?

Turn to the storage experts at GDT. For the past twenty-two (22) years, GDT has been a leading network and systems integrator by partnering with industry leaders, such as Dell EMC, HPE, Cisco, VMware, Pure Networks, Citrix, F5 Networks, and dozens of others. Our tenured, talented solutions architects and engineers deliver customized, cutting-edge client solutions with best-of-breed technologies that lead customers on their digital transformation journey. For more information about your storage solutions options―whether you’re in the SMB or enterprise market―contact GDT’s solutions architects or engineers. They’d love to hear from you.

For more information about storage solutions, read: Flash, yes, but is it storage or memory?

Don’t put off ‘til tomorrow what you must do today

By Richard Arneson

Disaster Recovery planning is like insurance―you know you need it, but there’s nothing fun about it. And that’s before you’ve even paid a premium. It’s easy to file it into one (1) of two (2) categories: I’ll get around to it or It’ll never happen to us. And like insurance, taking either philosophy could leave behind a wide swath of damage from which total recovery may be impossible.

Actually, there’s a third reason disaster recovery planning is often the victim of procrastination―it’s not easy. In fact, it can be very complicated if done, well, properly. But it’s needed; not later, but now.

The following are ideas to consider prior to sitting down to take that first stab at creating a plan. There’s no question, each of the five (5) points will spawn a myriad of additional things to consider, but it will get you headed in the right direction.

Create a DR Team

Developing a Disaster Recovery Team that meets regularly, even after the plan has been crafted and tested, will help create a more collaborative, open attitude toward disaster planning. Incorporate a wide range of skill sets within the team, and each member should have a well-defined role. In addition, each should have backup roles; for instance, somebody whose primary responsibility is applications might have a secondary role working with the telecom department.

Inventory your Assets

An IT inventory must be conducted to include all applications, network assets, applicable personnel and vendors. Create a list of all that will be needed to recover from a disaster. Include network diagrams and any recovery sites, and ensure all equipment, including cables and wiring, is labeled. It might sound elementary, but if it’s not done, tracing cabling back to devices will take time and create unnecessary costs and headaches.

Once you’ve inventoried personnel and vendors, create a call list that―regarding personnel―details their responsibilities and backup assignments. Assign the management of the call list to one (1) person to avoid any blame games. And make sure they’re held accountable for updating it regularly.

Document the Plan

Once inventories have been conducted and verified for accuracy, include any pertinent information, such as software licenses and asset lifecycles. And while it hopefully won’t be needed, include information about applicable insurance, including policy numbers. If you’ve designated a recovery site, include information and maps about how to get there. Don’t leave out something because you assume it’s widely known. If you’re going to assume anything, assume that whoever refers to the plan knows nothing. You won’t offend anybody by including information that seems rudimentary or unnecessary. What will be offensive is if personnel refer to the plan and it’s unclear.

Now Test it…and test it…and test it

Prior to testing your plan, which should be conducted at least once a year, script it out, then rehearse it with key personnel. If you’re concerned that testing the entire plan will pull employees off projects for extended periods of time, test subsets, or smaller chunks, of it. But like anything, the more you rehearse the better you get. You can throw in some curveballs and see how the backup planning works. Pretend certain staff members are on vacation; see if their backup is ready to enter the game and make a difference. Or test it with personnel who have had nothing to do with its creation. Get creative, pretend you’re a football coach. Throw a variety of issues at your plan and personnel and see how well it stands up. See if your documentation is easy to follow and covers all the bases.

Get Executive Buy-In

Make sure to get executives to understand the importance of a DR plan and why taking time to create and test it on a regular basis will mean taking personnel off of projects or initiatives from time to time. Ensure they understand that creating a DR Plan will encompass all departments and key stakeholders from each, and that the plan isn’t static―it needs to be re-evaluated, edited and tested on a regular basis.

Need more info about creating a DR Plan?

Turn to the experts. For the past twenty-two (22) years, GDT has been a leading network and systems integrator by partnering with industry leaders, such as HPE, Cisco, Dell EMC, VMware, Pure Networks, Citrix and F5 Networks. Our tenured, talented solutions architects and engineers deliver customized, cutting-edge client solutions with best-of-breed technologies that lead customers on their digital transformation journey. For more information about creating a DR plan for your organization, contact GDT’s solutions architects or engineers. They’d love to hear from you.

And if you’d like to learn more about DR plans, you can read about them here:

DR in the Cloud

How do you secure a Cloud?

Want to read about a cool, real-world blockchain application? Oh, and it’s not Bitcoin

By Richard Arneson

With a market value of almost $300 billion, retail juggernaut Walmart, which includes Sam’s Club, has turned to blockchain to keep customers safe from future produce-related illnesses. It’s estimated that outbreaks of foodborne illnesses, like those that occurred in April due to E. coli-tainted romaine lettuce, result in combined costs of over one hundred fifty billion dollars ($150bn) due to medical care, sick days taken from work and discarded food.

Walmart announced that by January of 2020 all their California-based produce suppliers―Dole, Fresh Express and Taylor Farms, to name a few―will be required to join its blockchain-based supply chain, which they’ve been working on and testing for the past two (2) years. They’re confident the technology will make it far easier to trace the source of any produce containing dangerous bacterial strains, such as E. coli, listeria, salmonella and campylobacter. And Walmart isn’t just turning to blockchain to keep customers safe; they estimate that implementing it in their supply chain will save them considerable time and millions of dollars they lose due to recalls.

They’re focusing more specifically on produce suppliers from California’s Salinas Valley region, which is where the vast majority of April’s tainted romaine lettuce was grown. It was reported that the lettuce killed five (5) people and hospitalized over two hundred (200) nationwide. Consumers were advised to refrain from purchasing romaine lettuce grown in California or Arizona; Arizona’s Yuma region was reported to be another, albeit small, source of tainted lettuce. This advisory, while necessary, proved somewhat ineffective, as consumers found it hard to determine where their purchased lettuce had been grown. The Yuma-grown lettuce resulted in only a single incident, which occurred at an Alaska correctional facility.

Here’s how Walmart’s blockchain-based supply chain works

Dole, which is Walmart’s largest supplier of produce, has participated in their blockchain trial for almost two (2) years. Blockchain’s distributed ledger concept serves as a decentralized accounting system that will be accessible, once it’s widely deployed, by all of Walmart’s produce suppliers.

The initial block of the chain originates from the grower, after which packers and shippers enter their pertinent information on the next block. At that point Walmart receives the chain, which is entered into its distribution system. All parties involved will be able to see the entire ledger, meaning all blocks attached to the chain. While Walmart’s vendors won’t know which company entered information, they’ll be able to see that a particular element within the supply chain was completed.
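That hand-off chain is easy to sketch in code. The Python below is a toy model, not Walmart’s actual system; the party names and lot number are hypothetical. But it shows the property that matters: each block’s hash covers the previous block’s hash, so quietly altering any link breaks every block downstream.

```python
import hashlib
import json

def block_hash(data, prev_hash):
    """The hash covers the block's data AND the previous block's hash,
    which is what chains the blocks together."""
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev": prev_hash, "hash": block_hash(data, prev_hash)}

def verify_chain(chain):
    """Recompute every hash and check every back-link; tampering anywhere
    in the chain makes verification fail."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["data"], block["prev"]):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Each party in the supply chain appends its own block for a lot of produce.
chain = [make_block({"party": "grower", "lot": "RL-1042"}, prev_hash="0")]
for party in ("packer", "shipper", "retailer"):
    chain.append(make_block({"party": party, "lot": "RL-1042"}, chain[-1]["hash"]))

print(verify_chain(chain))  # True

# Quietly altering an earlier entry breaks every block after it.
chain[1]["data"]["lot"] = "RL-9999"
print(verify_chain(chain))  # False
```

That tamper-evidence is what would let investigators trust the recorded trail all the way back to a specific farm.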

Here’s what blockchain will provide Walmart and its suppliers

These recent outbreaks of E. coli have been the worst produce-related ones in history. Current supply chain methods, which are widely used in the grocery industry, meant tracking the source of the tainted romaine lettuce took a considerable amount of time. And while the FDA (Food & Drug Administration) and CDC (Centers for Disease Control and Prevention) were busy trying to track down the source of the affected lettuce―which took weeks―it was being distributed and consumed far and wide.

While safety certainly takes precedence, Walmart will enjoy other benefits as well, such as faster payments to their produce-related vendors. It will also assist them in determining which products have the longest shelf life. And consumers, who are more in tune with what they’re putting into their bodies these days, will be able to access far more information about the food they’re eating and feeding to their families.

Presently the FDA requires that companies within a grocer’s supply chain maintain information only on whoever lies one step before and one step after them. This lack of intelligence makes it hard for the FDA and CDC to trace the source of bacterial strains. Add to that the fact that on average there are almost a thousand outbreaks of foodborne illness each year (a number that will probably be eclipsed in 2018).

Questions about which technologies can help you meet your digital transformation goals?

The talented, tenured technologists at GDT can provide the answers. They’ve implemented cutting-edge solutions for customers of all sizes and from a variety of industries. They’d love to hear from you!

And check these out…

You can get a little more educated about blockchain here. And click here to watch a great Lunch n’ Learn video presentation on blockchain conducted by GDT Network Engineer Ryan Rogers.

Gen V―very important, but probably not what you think it means

By Richard Arneson

Gen X, Gen Y, NextGen, 5G, 4G…if you could buy stock in the number of ways Gen has been used, I’d be the first to reach for my checkbook. Here’s another one, and it may be the most important―Gen V.

Gen V is what Checkpoint, a 25-year-old leading provider of cyber security solutions, dubbed the latest generation of cyber threats. Just yesterday, in fact, there were over 12 million attacks…and that wasn’t even an outrageously intense day in the cyber threat world. You can get more details at ThreatCloud, Checkpoint’s worldwide threat map. It’s fascinating, but bone-chillingly scary when you see the threat totals at the upper left.

To fully understand Gen V, you might find it useful to learn about Gens I through IV

Gen I

Remember how awesome it was to have your own PC, only to have that excitement spoiled, at least somewhat, once you learned about hackers? The bad guys launched viruses, and the nascent IT security industry returned serve with anti-virus products. Simple enough.

Gen II

Once the Internet became as much a part of our lives as central heat and air, the hackers followed suit. The Internet allowed them to communicate better, collect information more easily, and raise the stakes to benefit financially. Gen II allowed maliciousness to reach a much broader audience by ushering in software that could be launched corporate-wide. A single infected PC could result in widespread, crippling infections. Security vendors responded with intrusion detection systems (IDS) and firewalls.

Gen III

Not surprisingly, attackers eventually found a way to breach those firewalls and intrusion detection systems and did so, in part, by becoming experts at analyzing victims’ software and networks. This resulted in the IT industry determining that a more active, less reactive, approach to security was needed. For instance, Checkpoint began to focus on better preventative measures and launched their IPS (intrusion prevention systems) products.

Gen IV

With Gen IV, threats became more sophisticated and resulted in everything from breaches that exposed personal information to national security threats, including―gulp!―international espionage. Gens II and III resulted in better inspection of traffic but failed to inspect and validate content that could be included in emails and downloads. Checkpoint responded with sandboxing and anti-bot products that beautifully addressed this new level of maliciousness, including zero-day attacks, which exploit flaws that organizations didn’t even know existed. They’re called zero-day attacks because they can be exploited immediately, giving victims zero time to create and load the patches needed to address the vulnerabilities.

Gen V Attacks―when the bad guys bring out the big guns

If Gen I through IV attacks are guns, tanks and rocket-propelled grenades, Gen V represents bombs of the atomic or nuclear variety. Wide-scale infection and destruction ensue from Gen V attacks, as blistering, multi-vector attacks are covertly leaked and launched. The resultant casualties can number into the millions, as prior Gen tools and product sets prove no match for this new, heightened level of digital evil. Checkpoint determined that a more integrated and unified approach to security was needed. They developed a unified architecture with an even higher level of advanced threat protection, one that includes the sharing of real-time threat intelligence. Their Gen V security solutions address customers’ mobile devices, their use of the Cloud, remote offices, even virtual endpoints.

The Security Check-Up

GDT’s July 17th blog entitled Rx for IT Departments: a Security Check-Up addresses the importance of conducting a security check-up for your organization. To dovetail with that, Checkpoint provides an online security tool called CheckMe. It runs simulations to test the security of your network and its endpoints, including your organization’s use of the cloud and mobile devices. And it comes at the perfect price of free!

Call on the security solutions experts

GDT’s tenured and highly certified security professionals are experienced at implementing managed security solutions, including those from premier partner Checkpoint. After years of working closely with Checkpoint, it comes as no surprise that Checkpoint has been recognized for the 7th year in a row as a leader in Gartner’s annual Magic Quadrant for Unified Threat Management (UTM). For more IT security information, contact GDT’s security professionals. They’d love to hear from you.

Why should I care about 5G?

By Richard Arneson

Like the G’s that have preceded it, 5G has gotten a lot of press and pub for what seems like years. In the IT industry, however, months can feel like years. Eager technophiles are anticipating the day when they can use―then proudly broadcast to the world―whatever technology we’ve been hearing about for months and months. Welcome to the current hottest of topics―5G.

But first, a quick walk down the memory lane of mobile phones

1G

1G was the first generation of wireless communications. Actually, using the word communications is a little misleading. It suggests that there was more than one (1) type of communication; 1G delivered only voice. Think back to the 1980’s when cell phones first became available. It felt like only the top 1% of wage earners had one. The cell phones were heavy and comparable in size to a World War II field phone. They couldn’t fit in your pocket and could only be stuffed into a briefcase with expanding sides. These phones weren’t even digital, but analog, and the battery seemed to always last less time than the call you were on.

2G

Introduced in the early 1990’s in Finland, 2G provided something so cutting edge at the time that people used its key feature to transmit things like “Hi”, “Hello”, “Are you getting this?” and “Do you believe this actually works?” Yes, it marked the advent of text messaging. Also known as SMS (short message service), its next evolution, MMS (multimedia messaging service), allowed pictures, audio and video to be attached to text messages and transmitted. The max speed went from 1G’s 2.4 Kbps to 50 Kbps. Incomprehensible…at the time.

3G

Not to give short shrift to 1- and 2G, but 3G, which was introduced to the marketplace in the late 1990’s, was arguably the first “next generation” of which the general public really took notice. And why not, when speeds shot up to 2 Mbps and it marked the first time the words mobile and broadband were linked together. Users began to use their phones to access the Internet and stream content. It also accompanied what some at the time (OK, I was one of them) considered a little crazy, something that would rarely be used, if ever, on a cell phone―the camera.

4G

In 2008, 4G, our current standard, perfectly helped usher in the smart phone. It delivered speeds up to 100 Mbps, which was required considering consumers began using smart phones for gaming, HDTV and videoconferencing…all those applications that demand crazy high-speed data transmission. Remember Apple’s 2007 introduction of the iPhone with its “Hello” advertising campaign that first aired during the Academy Awards broadcast?

Hello 5G

While not quite here, 5G is just around the corner. Here’s what it will mean to consumers:

Faster speeds

5G touts much, much faster delivery and downloading of data, a feature that shouldn’t come as a surprise to anybody. A new generation of wireless without faster speeds would be like a new music technology that doesn’t profess clearer, more dynamic sound. Speeds for 5G are supposed to be over ten times (10x) those of 4G, or around 1 Gbps.
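To put those round numbers in perspective, here’s a quick back-of-the-envelope calculation in Python; the rates are the nominal peaks cited in this article, and real-world throughput is always considerably lower.

```python
# Rough time to download a 600 MB video at each generation's nominal peak rate.
rates_kbps = {"1G": 2.4, "2G": 50, "3G": 2_000, "4G": 100_000, "5G": 1_000_000}

file_bits = 600 * 1_000_000 * 8  # 600 MB in bits (decimal megabytes)
for gen, kbps in rates_kbps.items():
    seconds = file_bits / (kbps * 1_000)
    print(f"{gen}: {seconds:>12,.1f} seconds")
```

At 5G’s 1 Gbps that video arrives in under five (5) seconds; at 1G speeds, the same file would take more than three weeks.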

Lower latency

Latency, or the time it takes to move data from device to device, will be greatly reduced with the introduction of 5G. While 4G might be fitting the bill for your current needs, lower latency will prove critically beneficial, even lifesaving, for certain applications, such as surgery or the need for real-time data delivery to and from connected cars.

IoT

Faster speeds, lower latency…both can be chalked up to the need for each in the IoT world. In the next four (4) years, the number of IoT devices in use today (17 billion) will double, and with that precipitous growth comes the need for more cells to pick up and transmit the data. With 5G, smaller amounts of data will be transmitted by lower frequencies, while larger, bandwidth-hogging amounts will occur at higher ones. These multiple frequencies will require service providers to deploy smaller, but densely packed, cells on existing towers. These cells will determine the type of data, and its resultant frequency, that needs to be transmitted.

But before you get your credit card out…

It’s estimated that 5G won’t be fully deployed until 2022. Remember, the service providers don’t, and can’t, roll out a next-generation wireless technology all at once. They have a lot of cell towers to upgrade, so it’s implemented in stages. But all of the major carriers will begin 5G implementation in selected markets by the end of 2018―yes, that’s this year and only three (3) short months away.

Mobility and IoT Experts

If you’d like more information or have questions about what 5G can and will mean to your organization, contact the talented, tenured solutions architects and engineers from the IoT and Mobility Solutions practices at GDT. They’d love to hear from you.

For more about Mobility and IoT…

Click here to get more information about mobility solutions, and here to watch a video about how GDT delivered a secure mobility solution to a large retailer.

Busting myths about fiber optics

By Richard Arneson

How often do you and your buddies sit around and talk about fiber optics? That little, huh? It would be a bit like chewing the fat about your home’s electrical wiring. Sure, it could happen, but conversations related to politics, sports, religion, et al. will probably trump wiring every time. Fiber optics is a lot like electricity―it’s been around a long time, is reliable, and we only talk about it when it doesn’t work. Oh, and life without it just may prove unlivable. For instance, if you’re thinking you’ll use your smart phone to hop on the Internet or make a phone call, it won’t be possible without fiber optics. While you don’t see fiber strands dangling from your smart phone, there’s a little thing called wireless backhaul. After your wireless voice or data hits the nearest cell tower, those 1’s and 0’s are carried back to the service provider’s network via…fiber optics. And that’s just a small example of why fiber optics, whether you realize it or not, is as critical to our way of life as electricity.

So in the event you hear any of the following disparaging remarks about fiber optics, rest assured they’re all myths.

Myth 1—Fiber optics is glass…of course it’s fragile

Just the word fiber should be enough to debunk this myth. Think about fiberglass and its many uses that demand durability; it’s composed of glass fibers and may well wrap the car you drive. Fiber optics, when compared to its copper counterpart, is considerably more durable. While tugging on it isn’t recommended, its pull tension is much stronger than that of copper or coax. And it’s far better equipped to handle the wide array of environmental conditions that are thrown at it. Consider water, for instance. Copper carries signals electrically―not good when mixed with water. Fiber optics, on the other hand, carries signals with a beam of light. Try this one on for size: the fiber optic cabling used outdoors has a 600- to 800-pound tension rating. Not to suggest that you can swing on it, but it’s super strong. Busted.

Myth 2—Fiber optics is very pricey

This myth was once true, at least partially, but at present installing fiber optics is comparable in cost to installing copper or coax. Its price has steadily decreased due, in part, to advances in signal termination technology, which has become cheaper and more efficient. Also, less equipment is needed for fiber networks, and, because fiber doesn’t utilize electricity, it can even lower your utility bills. Busted.

Myth 3—Fiber optics installations are difficult

Like the price myth, this one was at one time factual. But that fact died sometime in the mid-1990’s. For years fiber optics has been the standard of choice for service provider backbones. If field operations personnel aren’t comfortable working on and installing fiber optics by now, their skill sets are about twenty (20) years behind the times. And due to fiber optics’ lack of an electrical current, there are fewer routing restrictions and no need to worry about electromagnetic interference (EMI) or radio frequency interference (RFI). Busted.

Myth 4—Bend it and you’re cooked

There was a time when fiber optics was more sensitive to bending, but the notion that bending ruins it has always been a myth. Yes, it was once a little less bend-friendly, but now bend-insensitive fiber is used in the event a super tight radius is required. This is just one of the many reasons why fiber optics is so amazing. Bend-insensitive fiber has a trench that surrounds the fiber but lies inside the cladding encasing it. This tiny trench is highly refractive, so any light that escapes the fiber due to a tight radius is refracted back to it. If you could bend a mirrored tube around, say, a telephone pole and shine a flashlight in one end, light would exit the other, right? This is very similar to how bend-insensitive fiber works, except that, technically, mirrors reflect light and fiber optics refracts it. Busted.

As a side note, bend-insensitive fiber is used solely indoors; outdoor applications should never require that tight of a turn radius. If one does, the layout has been poorly planned.
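For the curious, the physics at work here comes down to one number: the critical angle, beyond which light striking the core/cladding boundary is reflected back into the core instead of escaping. A quick sketch, using illustrative refractive indices (real fibers vary):

```python
import math

# Illustrative refractive indices for a step-index fiber; real values vary.
n_core, n_clad = 1.48, 1.46

# Total internal reflection occurs when light strikes the core/cladding
# boundary at more than the critical angle (measured from the normal).
critical = math.degrees(math.asin(n_clad / n_core))
print(f"critical angle: {critical:.1f} degrees")  # roughly 80.6 degrees
```

A tight bend lets some rays hit that boundary too steeply, and it’s exactly this leakage that the trench in bend-insensitive fiber is there to catch.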

Now for some quick FACTS about fiber optics

It’s super-fast (only slightly slower than the speed of light), has far less attenuation (signal loss) than copper or coax, is impervious to EMI and RFI, doesn’t pose a fire hazard, and doesn’t require replacement nearly as often as coax or copper. Those are some of the many reasons why fiber optics will be around, and continue to be vital to our lives, for a long time to come.
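Attenuation is measured in decibels per kilometer, and it compounds over distance. Here’s an illustration in Python, assuming roughly 0.2 dB/km for modern single-mode fiber; actual figures depend on the fiber and the wavelength used:

```python
# Attenuation in decibels compounds with distance.
def power_remaining(db_per_km, km):
    """Fraction of launched optical power left after `km` of cable."""
    return 10 ** (-(db_per_km * km) / 10)

print(f"fiber after 50 km: {power_remaining(0.2, 50):.0%}")    # about 10%
print(f"fiber after 100 km: {power_remaining(0.2, 100):.1%}")  # about 1%
```

Copper and coax lose signal far faster at high frequencies, which is why they need amplification every few hundred meters while fiber can run for tens of kilometers between regenerators.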

For questions, turn to these optical networking experts

If you have questions or would like more information about fiber optics or optical networking, contact GDT’s Optical Networking practice professionals. Composed of experienced optical engineers, solutions architects and project managers who specialize in optical networks, the GDT Optical Networking team supports some of the largest service providers and enterprises in the world. They’d love to hear from you.

For additional reading material about fiber optics, check these out: A fiber optic first and When good fiber goes bad.

Blockchain―it’s more than just Bitcoin

By Richard Arneson

Think back to that Accounting 101 class you took in college. As an English major, I found the class to be miles on the other side of difficult. I thought I’d accidentally signed up for the CPA prep course. But the first thing you learn is (let’s all say it together) Debits to the left, credits to the right. What you add to or subtract from one side, you do the opposite to the other. Add in a credit, and subtract that amount from debits, and vice versa. And with that, a ledger has just been described, which is exactly what Blockchain is. Blockchain’s first and most widely publicized product is Bitcoin, a cryptocurrency that makes a ledger available to anyone, whether they’re involved in a transaction or not. However, this public ledger doesn’t disclose the parties involved in any of the transactions.

Blockchain was created in the late 1990’s and is a comprehensive listing of records linked, or chained, together. Bitcoin runs on the Blockchain platform and blends together the worlds of technology and finance. Bitcoin was created in 2008 by Satoshi Nakamoto, a pseudonym for either a person or a group of people―nobody’s quite sure which. Bitcoin has been one of the most talked about topics in years for two (2) reasons:

  1. Tremendous gains for investors trading (as in buy low, sell high trading) in Bitcoin have been widely reported, even though news about the hefty transaction fees charged by Bitcoin exchanges has been reserved for the back page.
  2. Bitcoin has been the primary currency demanded by those who launch ransomware due to the erroneous belief that Bitcoin transactions are untraceable. They can be traced, but I’ll save that for a future blog.

In Blockchain, including Bitcoin, of course, those spaces in which you enter debits or credits are called―appropriately―blocks. And those blocks are chained together (name makes sense now?). Each block contains a user’s unique identifier and information about both the transaction and the block that precedes it. Each block further strengthens the chain by verifying the previous block. The more blocks, the more times the chain gets verified. And because Bitcoin, like Blockchain, is a distributed ledger and not a centralized database, it can’t be altered.

How are Bitcoin transactions conducted?

Each Bitcoin user has an account, known as a Bitcoin wallet, in which their Bitcoin balance and information about all their transactions is maintained. If a user needs to send Bitcoin to another user, they publish their intent to do so, after which Bitcoin nodes receive the information and verify that the sender has enough money in their wallet and hasn’t already sent it to somebody else. Once that’s completed, a block is created that includes the sender’s identifier, information about the transaction, including the recipient’s unique identifier, and the preceding block in the chain.
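That verify-before-commit step can be mimicked in a few lines. The following is a deliberately simplified toy with made-up names and balances; real Bitcoin tracks unspent transaction outputs (UTXOs) rather than account balances, but the gatekeeping idea is the same:

```python
# Nodes confirm the sender actually has the funds before the transfer
# is accepted onto the ledger.
wallets = {"alice": 5.0, "bob": 1.0}
ledger = []  # the accepted transactions, in order

def publish(sender, recipient, amount):
    if wallets.get(sender, 0.0) < amount:
        return False  # rejected: insufficient funds (or already spent)
    wallets[sender] -= amount
    wallets[recipient] += amount
    ledger.append({"from": sender, "to": recipient, "amount": amount})
    return True

print(publish("alice", "bob", 2.0))  # True: alice had 5.0
print(publish("alice", "bob", 9.0))  # False: alice only has 3.0 left
print(wallets)                       # {'alice': 3.0, 'bob': 3.0}
```

In the real network this check is performed independently by many nodes, which is what removes the need for a bank to sit in the middle.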

Bitcoin uses Blockchain, but Blockchain is more than just cryptocurrency

Bitcoin and Blockchain are mistakenly used interchangeably. Blockchain is a platform utilized by Bitcoin. In fact, Bitcoin is only one (1) of hundreds of applications that utilize Blockchain. While the word ledger brings to mind numbers, Blockchain can provide a ledger, of sorts, for other things, including contracts, land registries, medical records, music rights for piracy prevention, and many, many more.

Oh, and about those photos…

Blockchain is a fairly straightforward technology, but, in the case of Bitcoin, those stock photographs of shiny gold Bitcoins posted in just about every article you’ve seen on the subject have only added to any confusion. Remember, Bitcoin is virtual currency and utilizes cryptography to secure and verify transactions. No, there are no physical, tangible Bitcoins. You can’t stuff them in your pocket, lose them between sofa cushions or find them at the bottom of your clothes dryer.

If you’re still in need of a visual, this might help—click here to watch real, live Bitcoin transactions. Whether you consider this fun is up for interpretation.

Questions? Turn to the Experts

GDT is a 22-year-old network and systems integrator that employs some of the most talented and tenured solutions architects and engineers in the industry. They design, build and deploy a wide array of solutions, including managed services, managed security services and professional services. They operate out of GDT’s 24x7x365 Network Operations Center (NOC) and Security Operations Center (SOC) and oversee the networks and network security for some of the most notable enterprises, service providers and government agencies in the world. They’d love to hear from you.

And for more great information about Blockchain and Bitcoin, watch GDT Network Engineer Ryan Rogers conduct a great Lunch ‘n’ Learn presentation on both here.

The European Union and cookies…not exactly a love story

By Richard Arneson

To detail in a book the benefits that the digital age has delivered over the past twenty (20) years would make Moby Dick look like a brochure. In a much, much smaller book would be a list of any negative ramifications, most of which would fall under the label Security. Here’s a third book: Annoyances. Sure, they’re far outweighed by the benefits, but they’ve afflicted everybody who’s turned on a computer, smartphone or tablet to access the Internet.

For years it was buffering, which left the user waiting and waiting―then grabbing coffee while waiting―as the small hourglass or spinning circle ostensibly meant your request was being processed. And how about the slow dial-up Internet connections, those noisy, awkward network handoffs, and the pop-ups, which are electronically akin to billboards randomly popping up in front of your car and bringing it to a grinding, screeching halt. Now we’ve got a new one making the digital scene en masse: the cookie consent banner, brought to you by the European Union (EU).

Cookies, those of the electronic variety, have been around for years and for the most part went unnoticed. You’d set up your browser to accept, not accept, or confirm their download before proceeding, but once that decision had been established in the browser settings, they didn’t provide much of a speed bump in the road. Cookies are small files that are essentially lookup tables and hold simple data, such as the user’s name. If accepted, they can be accessed by both the user’s computer and the web server, and provide a convenient way of carrying data from session to session without having to re-enter the information.
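You can see the whole mechanism with Python’s standard library. The cookie name and value below are made up, but the round trip (server sets, browser returns) is exactly the convenience described above:

```python
from http.cookies import SimpleCookie

# A server attaches a cookie to its response headers...
cookie = SimpleCookie()
cookie["username"] = "rarneson"
cookie["username"]["max-age"] = 3600  # browser discards it after an hour
print(cookie.output())  # the Set-Cookie header line

# ...and the browser echoes it back on later requests, letting the server
# restore state without asking the user to re-enter anything.
returned = SimpleCookie("username=rarneson")
print(returned["username"].value)  # rarneson
```

That small convenience, multiplied across every site you visit, is also why regulators became interested in what those files disclose.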

In the past couple of months, however, the subject of cookies has been revitalized. Click on certain websites and you’re suddenly face-to-face with a pop-up banner that alerts you to the fact that the site utilizes cookies. Yep, a speed bump.

Why is the cookie consent banner showing up all of a sudden?

The European Union, which was established in 1993, was an attempt to buoy the competitiveness of its member countries, which now number twenty-eight (28). It eliminates trade and monetary borders between EU countries, making for an easier flow of goods and services. And, yes, it established the euro, which is, behind the U.S. dollar, the most commonly held currency in the world. But in 2002, the EU took on another pet project―cookies. It determined that Internet users’ privacy wasn’t being adequately protected and cookie disclosure wasn’t being communicated. Hence came the EU’s Cookie Law, which is officially known as the 2002 ePrivacy Directive (ePD). The Cookie Law, or ePD, was not really a law, but a set of goals. It was up to each of the EU members to draft and enforce their own legislation based on these goals―most didn’t. Enforcement was minimal, if it existed at all. See toothless.

In 2011, the EU enacted the ePrivacy Regulation (ePR), which, as its name suggests, actually is legislation that can be enforced EU-wide. The ePR incorporated other elements as well, such as marketing efforts related to email, faxes, texts and phone calls. Unless you were directly affected by it, the ePR flew well under the radar. That is, until 2017, when the EU updated the ePR and selected May 2018 as its launch date to coincide with that of the General Data Protection Regulation (GDPR). While the GDPR is not technically a subset of the ePR, the two overlap; the GDPR focuses solely on users’ personal data, while the ePR is broader in scope and protects the integrity and confidentiality of communications and data even if it’s not of a personal nature.

The good news? The EU has already stated that in 2019 it’s going to introduce simplified cookie rules and make cookie consent a more user-friendly experience. Simplified cookie rules? More user-friendly cookie consent? Yes, it sounds like the EU considers the cookie consent banner an annoyance, as well.

Questions? Turn to the Experts

GDT is a 22-year-old network and systems integrator that employs some of the most talented and tenured solutions architects and engineers in the industry. They design, build and deploy a wide array of solutions, including managed services, managed security services and professional services. They manage GDT’s 24x7x365 Network Operations Center (NOC) and Security Operations Center (SOC) and oversee the networks and network security for some of the most notable enterprises, service providers and government agencies in the world. They’d love to hear from you.

When being disruptive is a good thing

By Richard Arneson

The Innovator’s Dilemma is a fascinating book written in 1997 by Clayton Christensen, a Harvard professor who coined the term disruptive technology. He considered it one (1) of two (2) technological categories, the other being sustaining technology. Christensen defined disruptive technologies as those that, while new, are so cutting-edge that they haven’t yet been fully developed and thoroughly tested. As a result, he insisted, they might not be ready for prime time. Disruptive technologies create a lot of buzz and are rife with exciting possibilities, but aren’t viewed as being as “safe” as their sustaining counterparts. Imagine those days when consumers first heard about game-changing technologies like the radio and television. They were highly disruptive and littered with issues.

Sustaining technologies, Christensen wrote, are conversely those already being utilized that have delivered measurable, sustainable results. If you’ve worked in telecommunications (especially in sales), you know the decades-old axiom in that industry―nobody ever got fired for using AT&T. In other words, AT&T has been around the longest, has been used the most, and is considered the safest choice. If a CIO questions why you selected AT&T to carry your voice and data traffic, it’s an easily defensible decision. Getting a Fortune 100 company to dive into the world of disruptive technologies may prove difficult. They’ll be far less inclined to utilize something that promises, but hasn’t yet produced, quantifiable results. Once it has, the floodgates will soon open. Oh, and by that time it will have become a sustaining technology.

The smartphone might be the most disruptive of technologies since the introduction of the telephone at the turn of the (last) century. Telephones disrupted several industries, putting a dent in paper manufacturing and the U.S. Postal Service. Now consider the smartphone. It has devastated an array of industries, including photography, publishing, music, GPS devices, even calculators.

The current biggies in the Disruptive Technologies category

Artificial Intelligence (AI)

AI, while highly disruptive, frightens a lot of people. But whether they’ve been spooked by the fictional, sinister robots of yesteryear, are worried about what it may do with its mass of collected data, or are concerned that it will sound an employment death knell for a variety of industries, AI’s promotion to a sustaining technology will be here before you know it. Two (2) years ago, academicians and industry experts at the International Conference on Machine Learning predicted that by 2025 AI will outperform human thought. Wow.

Blockchain

Blockchain cryptocurrencies, such as its flag bearer Bitcoin, are no longer just a cryptic form of currency exchange that is preferred primarily by those who diabolically launch and hope to gain from disseminating ransomware. Many large, established banks worldwide are developing cryptocurrencies for selling financial products, such as bonds. Speaking of bonds, the SEC (Securities and Exchange Commission) now has a crypto-bond offering they’re calling Bond on Blockchain. And fund managers are now incorporating cryptocurrency into their portfolio mix.

Li-Fi (Light Fidelity)

In the event you haven’t heard of it, it’s cool and very disruptive. Called Li-Fi, which is short for Light Fidelity, it means light bulbs, of all things, will replace your home router. Deemed to be at least a hundred times faster than Wi-Fi, Li-Fi utilizes an LED light bulb affixed with a digital processor that sends data with emitted light. Yes, the data is in the light. Let your mind wander for a moment to consider how disruptive Li-Fi could be for any number of industries.

Need more info?

For the past twenty-two (22) years, GDT has been a leading network and systems integrator by partnering with industry leaders, such as, among many others, HPE, Cisco, Dell EMC, VMware, Pure Storage, Citrix and F5 Networks. Our tenured, talented solutions architects and engineers deliver customized, cutting-edge client solutions with best-of-breed technologies that lead customers on their digital transformation journey. For more information about the IT industry’s wide array of technologies, both disruptive and sustaining, contact our solutions architects and engineers. They’d love to hear from you.

Read about the differences between AI, Machine Learning and Deep Learning, and about Pure Storage’s answer to AI, FlashBlade.

Sexy, yes, but potentially dangerous

By Richard Arneson

Apologies for the headline in the event you’ll soon label it an act of sensationalism, but the topic of today’s blog needs to be considered, then forwarded, if you or others you know have implemented, or are in the planning stages of implementing, an IoT strategy for your organization. The IT industry is rife with two- to four-lettered initialisms and acronyms―SDN, BYOD, SLAM, SAN, BGT, CRC, IBT…we’ll stop there; this might be a list that is actually never-ending.

Unlike AI (there’s another one), which for some conjures up negative images, IoT (Internet of Things) is rarely the subject of similar scrutiny. IoT is exciting—sexy by IT standards—for several reasons, and one of the biggest is its ability to enable business owners to reach out to customers who might be standing outside their place of business, whether a storefront, bar or restaurant, at that very moment. Yes, when a technology can drive revenue, it’s always going to be a hot topic. But with the good comes bad, at least in the IT industry, and that bad usually falls under the heading Security. IoT, sadly, is no different, and the following represent the greatest present threats to IoT security.

The most prevalent types of security threats that affect IoT

Identity Theft

Identity theft requires one (1) primary element―lots and lots of data. Now consider the number of IoT devices at play in addition to smartphones―doorbells, thermostats, utility meters, watches, et al. They’re all connected to networks, which immediately broadens your attack surface. Each device holds personal data, and each can usher in a host of vulnerabilities. If patches or updates aren’t downloaded, or if, for instance, Alexa is traversing the same network you’re utilizing for Internet connectivity, you’ve created or broadened gaps in your security.

Con Artistry

Most consider themselves immune to this type of threat, but there have certainly been victims who once believed that very thing. Protecting yourself against con artists sounds commonsensical, but considerable IoT threats involve the inadvertent coughing up of sensitive information to those posing as bank employees or customer service representatives of a company you’ve done business with in the past. Usually these cons come in the form of email phishing, and the nets perpetrators cast are broad.

Distributed Denial of Service (DDoS) attacks

When a highway, or any type of thoroughfare, is shut down, you’re denied the service that roadway provides. DDoS attacks are no different. They’re usually carried out by a botnet, which floods a network with simultaneous requests from far more sources than the network can accommodate. The thoroughfare comes to a grinding halt, but the goals of DDoS attacks have less to do with data gathering and more to do with lost revenue and customers, including the sullying of a company’s good reputation that may have taken years to build.
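Defenses against this kind of flood typically start with rate monitoring: counting how many requests arrive within a sliding window and flagging any source that exceeds what legitimate traffic could plausibly produce. Here is a minimal sketch of that idea; the window size and threshold are invented purely for illustration.

```python
from collections import deque

class RateMonitor:
    """Flags traffic when requests within a sliding window exceed a threshold."""
    def __init__(self, window_seconds=1.0, max_requests=100):
        self.window = window_seconds
        self.max_requests = max_requests
        self.timestamps = deque()

    def record(self, timestamp):
        """Record one request; return True if the rate now looks like a flood."""
        self.timestamps.append(timestamp)
        # Drop requests that have aged out of the window.
        while self.timestamps and timestamp - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_requests

monitor = RateMonitor(window_seconds=1.0, max_requests=100)
# 50 requests spread over 5 seconds: normal traffic, never flagged.
normal = [monitor.record(t * 0.1) for t in range(50)]
# 200 requests inside a tenth of a second: botnet-style burst, flagged.
burst = [monitor.record(10.0 + t * 0.0005) for t in range(200)]
print(any(normal), any(burst))
```

Real mitigation appliances layer far more on top of this (source reputation, traffic scrubbing, upstream blackholing), but the sliding-window count is the common starting point.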

Botnets

The aforementioned botnet is a collection of networked systems that have been taken over and spread malware like the flu. The newly installed malware can produce a variety of costly symptoms, including the gathering of personal information and the launching of DDoS and phishing attacks, to name a few. The combination of systems makes botnets more insidious, as attacks can be spread from a variety of sources.

The Man-in-the-Middle

Remember the game Monkey in the Middle, where player C stands between players A and B and tries to intercept or block their pass? Man-in-the-Middle threats represent player C, which attempts to disrupt communications between users A and B. Here’s the difference: in a Man-in-the-Middle attack, users A and B don’t know there’s a user C in the game. Communications between the two (2) users are not only intercepted, but user C can then mimic user A or B―or both―to gather important and sensitive information. Intrusion detection systems (IDS) are probably the best preventative measure against Man-in-the-Middle attacks and can detect when user C tries to insert itself into the conversation.
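An IDS detects user C after the fact; an authenticated, encrypted channel such as TLS makes the impersonation itself hard to pull off, since user C can’t present a certificate that validates as user B. Python’s standard library enforces those checks by default, as this small sketch shows (configuration only, no live connection is made):

```python
import ssl

# A default context verifies the server's certificate chain and its hostname,
# which is what stops a "user C" from silently impersonating either endpoint.
context = ssl.create_default_context()

print(context.verify_mode == ssl.CERT_REQUIRED)  # certificate must validate
print(context.check_hostname)                    # name on the cert must match
```

The practical takeaway: disabling either check (a common shortcut in test code) reopens the exact gap a Man-in-the-Middle attack exploits.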

The IoT Industry is growing; unfortunately, so is its Attack Surface

It’s estimated that the number of IoT devices in use worldwide will more than triple in the next seven (7) years, growing precipitously from the current 23 billion to over 75 billion by 2025. The cat-and-mouse game that steadily pits security organizations and experts against cyber attackers will only intensify. That’s exactly why consulting with IoT and security professionals like those at GDT is critically important now, and will become even more so over time. GDT’s Security practice is composed of talented, tenured security analysts and engineers who protect the networks of organizations of all sizes and from a wide variety of industries, including service providers and government agencies. They’d love to hear from you.

The “App” is short for appliance, not application

By Richard Arneson

In 1992, several years prior to the dot-com bubble and when cell phones were the size, shape and weight of a canned ham, a company was born in Sunnyvale, California, located at the bottom tip of the San Francisco Bay. NetApp was the brainchild of three (3) individuals who had once worked for Auspex, a company against which they’d soon compete and, just a decade later, help send into Chapter 11 and onto the OEM scrap heap.

The Evolutionary Disruptor

Self-described as “the data authority for hybrid cloud,” NetApp made news in 2017 with its entry into the highly competitive Hyperconverged Integrated Systems (HCIS) market. In fact, that entry prompted Gartner to name NetApp an Evolutionary Disruptor in its 2017 Hyperconverged Integrated Systems (HCIS) Competitive Landscape study.

Originally, NetApp determined that it couldn’t optimally deliver to VMs the true value of SolidFire Element OS, its proven storage OS. Once it made that determination, it knew that entering the HCIS market was in its near future. This soul-searching helped it realize that, architecturally speaking, it made a lot more sense to package Element OS on bare-metal storage nodes so customers could take advantage of:

  • NetApp’s all-flash architecture,
  • Performance predictability through Quality of Service (QoS), and
  • Compression and Inline Deduplication across entire clusters.

Along with the many benefits that HCIS delivers―the ability to better address exact compute and storage needs, rapid scaling, and more predictable storage through more efficient consolidation―NetApp’s solution utilizes VMware’s bare-metal hypervisor (ESXi) on compute nodes and, thanks to a simplified installation process, customers can get their HCIS system up and running fast.

NetApp’s management UI enables customers to leverage any management technologies they’re currently utilizing, including VMware’s vCenter and vRealize for orchestration.

NetApp’s HCIS options

The NetApp HCIS offering starts with a minimum two (2) chassis, four (4) storage node configuration, after which additional nodes can be added independently. It comes in three (3) flavors:

  • Small compute (16 cores) with small storage (5.5TB capacity),
  • Medium compute (24 cores) with medium storage (11TB capacity), and
  • Large compute (36 cores) with large storage (22TB capacity).

Taking advantage of its large base of installed customers

ONTAP is NetApp’s proprietary data management platform for its storage arrays, such as FAS and AFF. That platform, combined with its SolidFire Element storage OS, allowed NetApp to tap into a large base of existing customers and provided an ideal launching pad for its HCIS solutions.

Need more info? Reach out to the experts…

GDT’s team of highly skilled and talented solutions architects and engineers has deployed hyperconverged solutions for customers of all sizes and from a variety of industries. They’re experts at delivering HCIS solutions from many of GDT’s premier partners―including, of course, NetApp―and at helping customers enjoy the many benefits of hyperconvergence. They’d love to hear from you.

Read more about hyperconvergence here:

The Hyper in Hyperconvergence

Composable and Hyperconvergence…what’s the difference?

Hypervisor shopping? Consider the following five (5) things before taking out your wallet

By Richard Arneson

Whether you’re looking to implement a virtualization strategy or are in the market to replace your current solution, you’ve got a decision to make―which hypervisor should I purchase? Remember, hypervisors are basically a platform for VMs: they abstract physical resources, such as memory and processors, from the host hardware, and those resources can then be allocated to each virtual machine. For instance, a single server can be virtually turned into many, which allows multiple VMs to run off a single machine. (Click here for a refresher on the difference between hypervisors and containers.)
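That carving-up of one physical box can be pictured with a toy model (hypothetical numbers and class names, not any real hypervisor API) in which a single server’s CPU and RAM are parceled out to VMs until the hardware runs out:

```python
class Host:
    """Toy model of a hypervisor carving one physical server into several VMs."""
    def __init__(self, cpus, ram_gb):
        self.free = {"cpus": cpus, "ram_gb": ram_gb}
        self.vms = {}

    def create_vm(self, name, cpus, ram_gb):
        # A real hypervisor can oversubscribe; this sketch simply refuses
        # to hand out more than the physical hardware provides.
        if cpus > self.free["cpus"] or ram_gb > self.free["ram_gb"]:
            raise RuntimeError("insufficient physical resources")
        self.free["cpus"] -= cpus
        self.free["ram_gb"] -= ram_gb
        self.vms[name] = {"cpus": cpus, "ram_gb": ram_gb}

host = Host(cpus=16, ram_gb=64)
host.create_vm("web", cpus=4, ram_gb=8)
host.create_vm("db", cpus=8, ram_gb=32)
print(host.free)  # {'cpus': 4, 'ram_gb': 24}
```

Real hypervisors add scheduling, oversubscription and isolation on top, but the core job, dividing physical capacity among virtual machines, is exactly this bookkeeping.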

First, determine which type of hypervisor you need

If you need to buy a bicycle for your upcoming 3-week mountain biking trip through the Sierra Nevada, you wouldn’t go shopping for a road bike with super narrow tires that can barely withstand the pounding of a pebble. You want a mountain bike that will fit the experience and help keep you upright while speeding down rocky, abandoned fire roads. You want the bike that will give you the best chance of success, enjoyment and safety. Hypervisors are no different.

While hypervisors perform an extraordinary service, there’s no doubt that naming the two (2) varieties was given little thought. Here they are―Type 1 and Type 2.

A Type 1 hypervisor is also referred to as bare metal, which simply means that it runs directly on the host hardware. Type 1 hypervisors are the faster of the two (2), requiring no OS acting as an intermediary, or middle layer, to slow them down.

A Type 2 hypervisor runs as a separate computer program on top of an OS, such as Windows or Linux. While Type 2 hypervisors perform slower, they’re much easier to set up and are great when a test environment needs to be quickly spun up.


If commodity isn’t the most over- and misused term in the IT industry, then it’s got to be a close second. There are some with the temerity to claim that hypervisors are a commodity, and that there’s little difference from one (1) to the next (it’s a pretty good bet that their sales numbers will somehow benefit from that uninformed characterization).

After determining the type of hypervisor you’ll need, it’s time to decide which is more important: high availability, or the flexibility to squeeze every ounce of performance from, as an example, CPU and RAM.

Hypervisors, unlike commodities, vary greatly from manufacturer to manufacturer. They’re complex, which is a given considering what they do, including, but not limited to, virtualizing all hardware resources, managing and creating VMs, handling all communications between VMs, and creating resource pools and allocating them to specific VMs. A commodity? Yeah, right.

Management Tools

If “hands-on” describes your VM management philosophy, then determining which hypervisor provides the best and/or most management tools should be a consideration. And those tools don’t just refer to ones of the out-of-the-box variety; understanding what’s available as add-ons from 3rd party developers should represent part of your purchase criteria.

Overall Environment

If you think you’ve found the mountain bike you’d like to buy, but its support, documentation, and ability to utilize 3rd party accessories are limited, you might want to reconsider. The same holds true for hypervisors. If a hypervisor’s support―including documentation, an active and easily accessible user community, and the ability to accommodate 3rd party developers―is limited, that should weigh into your decision. That’s not to say you’ve found a lemon that should be stricken from the mix, but deficiencies in these areas could prove frustrating, even costly, down the road.

Oh, yeah, the cost…

Pull out your paper and pen for the Pros & Cons list. In short, you’re looking to strike the perfect balance between functionality and cost. Here’s where it gets tricky―the price range of hypervisors is wide, as in Pacific Ocean-wide. Some are not only priced to move, but are practically given away. Also, make certain you understand any associated licensing.

And, yes, you can utilize hypervisors from multiple vendors, but management tools will vary from vendor to vendor, making management more complex. If certain workloads are less mission-critical than others, however, using different hypervisors might be the way to go.

…or you can turn to the hypervisor and hyperconvergence experts at GDT

The talented solutions architects and engineers at GDT have implemented a wide array of solutions for organizations of all sizes, including enterprises, service providers and government agencies. They are highly skilled at implementing solutions from GDT premier partners, including VMware for hypervisors, and hyperconverged solutions from HPE (SimpliVity) and Cisco (HyperFlex). They’d love to hear from you.

Disaster Recovery (DR) in the Cloud

By Richard Arneson

When organizations first began to realize that they’d become reliant on their computer systems, a new service was invented―or, at least, was needed―Disaster Recovery. Prior to that, disaster recovery meant little more than making sure your insurance premiums were paid up. This new reliance on computers―primarily due to mainframes in the early 1970s―resulted in IT professionals asking themselves the same question: What happens to all of our vital information if [fill in the blank] happens? The first company to answer that question was SunGard, which provided customers exact, functioning duplicates, or “hot” sites, of their existing infrastructure. If the primary went down, the secondary was used. SunGard’s solution served its purpose, but was expensive and immediately doubled customers’ infrastructure costs. Soon scaled-down solutions were offered (“warm” and “cold” sites), which replicated only those portions of the infrastructure that were required to remain operational at all times. Still expensive.

Over the years, there has been a spate of DR solutions, from physical tape backups that need to be stored off-site, to redundant WAN circuits linked to replicated networks hosted at 3rd party data centers. Regardless of the plan or strategy used, there are several things about DR that most in the industry have always agreed on―DR planning is time-consuming, tough to orchestrate, expensive to test, and definitely not the most glamorous of responsibilities in the IT industry. DR is a little like being the long snapper in football. You never hear about the good snaps, only the ones that sail over the punter’s head and out of the back of the end zone. Not much glory in that.

Here’s what utilizing the Cloud for DR provides…

Quicker Recovery Times

Backing up to the cloud enables customers to recover in a matter of minutes, as opposed to the days―sometimes weeks―a legacy DR plan can require. Virtualization delivers entire servers, operating systems and applications to a virtual server that can be backed up or copied to an offsite data center. And that virtual server can be spun up on a virtual host in the event a disaster makes it necessary.
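The underlying principle, copy the whole virtual server somewhere safe and stand the copy up when disaster strikes, can be illustrated with an intentionally tiny file-system sketch. Directories stand in for VM images here; no real DR product is this simple.

```python
import shutil
import tempfile
from pathlib import Path

def snapshot(source: Path, backup_root: Path, name: str) -> Path:
    """Copy the whole 'server' directory to a backup location (the offsite copy)."""
    target = backup_root / name
    shutil.copytree(source, target)
    return target

def restore(backup: Path, destination: Path) -> None:
    """Stand the copy back up, as a cloud host would spin up a backed-up VM image."""
    shutil.copytree(backup, destination)

root = Path(tempfile.mkdtemp())
server = root / "server"
server.mkdir()
(server / "app.conf").write_text("mode=production\n")

backup = snapshot(server, root, "nightly-snapshot")
shutil.rmtree(server)            # the "disaster"
restore(backup, server)          # recovery in seconds, not days
print((server / "app.conf").read_text())  # mode=production
```

Cloud DR services wrap this same snapshot-and-restore cycle in replication schedules, versioning and orchestration, which is what turns it from a script into a recovery plan.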

Easy Scalability

As opposed to traditional DR solutions (tape backups, redundant data center), utilizing the Cloud for DR means enjoying the flexibility of easily scaling storage capacity up or down based on exact business needs at that time.

Enhanced Security

Arguably the most common myth about the Cloud has to do with security. Actually, security may be one of the Cloud’s best benefits, as things like patch handling and security monitoring are delivered by Cloud providers, such as Azure, AWS or Google Cloud.

Significant Savings

The Cloud’s pay-as-you-go model is incredibly appealing, especially considering the IT industry has been saddled for years with the guilt that comes from waste and inefficiency. Right-sizing any solution has always been the bane of IT professionals; the Cloud provides an answer to that.

Give the Cloud experts a call

If you have questions about creating and/or implementing a DR plan that will entirely, or partially, incorporate the Cloud, contact GDT’s Cloud practice, which is composed of talented Cloud architects and engineers who have successfully deployed Cloud solutions from GDT premier partners AWS, Azure and Google Cloud. They’d love to hear from you.

FlashBlade™ ― an AI answer from a VIP provider

By Richard Arneson

If you are in any way connected to the IT industry, you can’t, and haven’t been able to for years, take a breath without stumbling across the word Flash. With apologies to the superhero created prior to World War II, flash was, as early as twenty (20) years ago, associated with Adobe Flash, the ubiquitous plug-in originally created by Macromedia that allows animations and interactive content to be incorporated into web browsers. Flash forward a few years and now that word is all about memory and storage. While flash storage was initially manufactured in 1992 by SanDisk, the technology didn’t truly sink its teeth into consumers until USB flash drives were introduced to the marketplace at the turn of the century (this century). Since those thumb drives were introduced, however, the word flash and how it’s referenced has come a long, long way.

(If you need a refresher on the relationship between flash memory and flash storage, check this out―Flash, yes, but is it storage or memory?)

Pure Storage―take a guess what they’re experts at?

Pure Storage, as its name implies, focuses on, and specializes in, one (1) hugely important segment of the industry―storage. Started just nine (9) years ago, Pure Storage is time and again voted a leader in its field. If you’re familiar with the Gartner Magic Quadrants, their analysis of solid-state arrays has listed Pure Storage within its coveted upper-righthand “Leader” quadrant in each of the last five (5) years. And if that’s not enough, they’re listed as the most northeastern company in the Leader quadrant. In other words, their “Ability to Execute” and “Completeness of Vision” places them firmly ahead of the other eleven (11) companies researched.

In the IT industry, being a jack of all trades and master of none―whether you’re an engineer, consultant, equipment manufacturer, et al.―can be a risky proposition. It’s possible (see Cisco, Dell EMC and HPE), but it’s far easier to take this approach if you’re in, well, another industry. Let’s face it; the IT industry is a far different animal. It encompasses so much information, thoughts, theories, research and technologies that attempting to master it all is like trying to sop up the Atlantic Ocean with a beach towel.

FlashBlade―another “flash” term you should learn

To dovetail with yesterday’s blog (Artificial Intelligence, Machine Learning and Deep Learning), FlashBlade is Pure Storage’s answer to the growing need for AI (Artificial Intelligence) and that technology’s ability to transform data into intelligence.

Earlier this year, Pure Storage joined forces with NVIDIA, the 20-year-old company best known for PC gaming, to create what they’re calling AIRI, which stands for AI-Ready Infrastructure. Gaming aside, NVIDIA created the GPU (graphics processing unit), which packs exponentially more cores per chip than a CPU. GPU cores are optimized specifically for data computations and are much smaller than CPU cores, which means more of them can fit on a single chip. And because AI, Machine Learning and Deep Learning must run computations against huge amounts of data, GPUs can perform up to ten (10) times better than their CPU counterparts.

The Pure Storage and NVIDIA AIRI is specifically built for deep learning environments and delivers a fully integrated platform that provides an out-of-the-box, scaled-out AI solution. The rack-scale architecture allows customers to add additional blades based on their specific AI needs, and to do so without any data migration or downtime.

Ultimately, AIRI was created to help customers more easily dip their toes into the AI waters with a low-latency, high-bandwidth, out-of-the-box solution, all in a compact, 4U form factor.

An even simpler solution…

The tenured, talented engineers and solutions architects at GDT are experienced at delivering advanced, cutting-edge solutions for enterprises, service providers and government agencies of all sizes. If you have questions about GDT premier partner Pure Storage and what its products and solutions can provide to your organization, contact them. They’d love to hear from you.

A-M-D-I-L-L: Unscrambled, these letters represent some of the hottest topics in the IT Industry  

By Richard Arneson

His name might not carry the same weight as Abner Doubleday’s, who is credited with inventing baseball in the early-to-mid 1800s, but Walter Camp is widely regarded as the creator of America’s most popular current sport―football. It’s impossible to know exactly what Camp envisioned for football, his amalgamation of soccer and rugby invented roughly fifty (50) years after Doubleday’s game, but this much is certain―he never imagined it would be used as an analogy to describe Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL).

Artificial Intelligence―the game

Given that it came before ML, which came before DL, AI, like football, has no predecessor. To borrow from mathematics, AI is the superset of subsets ML and DL. And like Camp’s invention, pinpointing the creation date of AI is next to impossible. While the coining of the name Artificial Intelligence is widely attributed to John McCarthy, who used it during a Dartmouth academic conference in 1956, its actual invention is up for debate.

However, here’s what is widely agreed upon―AI sets out to utilize a machine to mimic human thinking. For years―decades, in fact―the public’s understanding of AI was largely the result of science fiction, including, among countless other films, 2001: A Space Odyssey, Westworld and Blade Runner. Presently, AI is taking it on the chin amid fears that it will take jobs from people and that smart devices are covertly gathering way, way too much information about their users.

Today AI is utilized by too many applications and appliances to name, but the most familiar are Netflix, Amazon’s Alexa, Apple’s Siri and Nest, the learning thermostat that Google purchased four (4) years ago. While some might argue that those hardly represent the benefits of AI, there are certainly examples of how it can deliver a better quality of life. For instance, there are new AI platforms capable of providing health advice, including specific diagnoses, to people who can’t afford medical care or access medical facilities.

Machine Learning―the players and the plays they run

Machine Learning (ML) takes AI to the next level. It’s not uncommon to hear ML and AI used interchangeably―they shouldn’t be; they’re different. The players aren’t football; they’re the ones who play the game. While AI addresses “If A happens, then B needs to happen,” ML instead determines that “If A happens, then I’ll learn what should happen next.” Yes, the machine, as the name suggests, learns. Recommendation engines, for instance, use ML algorithms to gather the type of movie, book or song that you enjoy, then look to see what others who share your interests are into.
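That look-at-your-neighbors approach is the essence of collaborative filtering. A deliberately tiny sketch, with made-up users and titles, shows the mechanic:

```python
# Toy user-based collaborative filtering: recommend titles liked by the
# user whose tastes overlap most with yours (hypothetical data).
ratings = {
    "alice": {"Westworld", "Blade Runner", "2001"},
    "bob":   {"Westworld", "Blade Runner", "Alien"},
    "carol": {"Casablanca", "Vertigo"},
}

def recommend(user: str) -> set:
    mine = ratings[user]
    # Find the other user with the largest overlap in liked titles...
    best = max((u for u in ratings if u != user),
               key=lambda u: len(ratings[u] & mine))
    # ...and suggest whatever they liked that this user hasn't seen yet.
    return ratings[best] - mine

print(recommend("alice"))  # {'Alien'}
```

Production recommenders learn weighted similarities over millions of users rather than counting set overlaps, but the intuition, people with your taste predict your next pick, is the same.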

Deep Learning―the dekes, fakes and cuts

To remain with the football analogy, if AI is the game and ML represents the thinking players utilize to carry out the plays, DL is what allows a player to improvise in the event a defender stands between them and the goal line. DL attempts to enable machines to draw conclusions. Deep Learning is a type of Machine Learning, just its next evolution.

In the event this comes up in Trivial Pursuit, the Deep in DL is borrowed from deep artificial neural networks, which is another way of referencing DL―when, or if, you ever hear deep artificial neural networks, you’ve just heard a synonym for DL. And in case you’re wondering, neural refers to the interactions and interconnections that exist between the neurons in the human brain. Yes, the thinking human brain.
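Those stacked layers of artificial neurons are simple to sketch. The toy forward pass below (weights invented purely for illustration) shows what the word deep literally means: activations flowing through one layer after another.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs squashed through a sigmoid."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, layers):
    """Feed activations through successive layers; the 'deep' in deep learning
    is simply stacking many of these layers."""
    for weights_list, biases in layers:
        inputs = [neuron(inputs, w, b) for w, b in zip(weights_list, biases)]
    return inputs

# Two inputs -> hidden layer of two neurons -> one output neuron (made-up weights).
layers = [
    ([[0.5, -0.2], [0.3, 0.8]], [0.0, -0.1]),
    ([[1.0, -1.0]], [0.2]),
]
out = forward([1.0, 0.5], layers)
print(0.0 < out[0] < 1.0)  # a sigmoid output always lies in (0, 1)
```

What makes real deep learning powerful isn’t this forward pass but training: adjusting all of those weights automatically from data, across dozens or hundreds of layers.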

The best part about AI, ML and DL

Whether or not you realize it, you’re only a few clicks away from learning more about AI, ML and DL by accessing some of the most talented and experienced solutions architects and engineers in the industry. GDT’s engineering and technical expertise has delivered solutions to companies of all sizes and from a wide variety of industries. In addition to enterprises, GDT lists as clients some of the most notable service providers and government agencies in the world. They’d love to hear from you.

What is FedRAMP, and why is it mandatory for federal agencies?

By Richard Arneson

Politically speaking, people want the government to intervene either more or less, but here’s something we can all agree on—FedRAMP is a good thing. FedRAMP is short for Federal Risk and Authorization Management Program, which is another way of saying keeping federal agencies’ data safe when using cloud services. Now, instead of agencies deploying cloud applications and services willy-nilly (see unsecured), they can safely turn to a cloud service provider (CSP) that has earned FedRAMP accreditation. In addition to ensuring that agencies receive the highest levels of cloud security, FedRAMP also enables them to save considerable time and money that they’d otherwise spend assessing providers. Here’s another thing we can all agree on―government waste is a bad thing. FedRAMP addresses that.

The FedRAMP certification process

Becoming FedRAMP-certified is not like getting a driver’s license, where a few classes are taken, a simple exam is passed, a seal of approval is stamped and the certification issued. Getting FedRAMP-certified is an extensive process, and it should be. Not to downplay the importance of enterprises’ mission-critical information, but when it comes to government data, the safety of about 330,000,000 U.S. citizens is at stake.

Even though FedRAMP was introduced over seven (7) years ago by the U.S. Office of Management and Budget, there are currently only about one hundred (100) providers that are FedRAMP-certified. Each is broken out into one (1) of three (3) service models: IaaS, PaaS and SaaS. A handful are certified in more than one (1) service model, and that list is primarily composed of a few companies with which we’re pretty familiar―Google, Microsoft, AWS (Amazon Web Services) and Salesforce.

Providers can get FedRAMP-certified in one (1) of two (2) ways: through a JAB (Joint Authorization Board) provisional authorization (P-ATO), or through a select agency, known as an Agency Authority to Operate (ATO).

Joint Authorization Board provisional authorization (JAB P-ATO)

The JAB includes representatives from the Department of Defense (DoD), the Department of Homeland Security (DHS) and the General Services Administration (GSA). Their vetting process is so extensive that they authorize only three (3) CSPs per quarter. First, however, the provider must demonstrate that there is demand for its service from a wide array of agencies. That initial hurdle knocks a huge percentage of applicants out of the running.

Extensive security assessments are conducted by the JAB, after which they conduct, with the applicant, a collaborative deep-dive into their cloud offerings, architecture, and capabilities (especially as it relates to security). A thorough Q&A session caps off the application process, after which the JAB makes their decision to grant, or not grant, FedRAMP authorization.

Agency Authority to Operate (ATO)

The FedRAMP authorization process takes into consideration CSPs that have only a few agencies interested in their services, or that have designed a cloud for a particular agency. In this case―and because agencies are required to utilize only FedRAMP-authorized providers―the provider would apply for certification through the ATO process. Basically, it allows providers to gain certification on an as-needed basis.

The ATO process requires that the CSP formalize their partnership with a particular government agency. First, however, their service must be fully built and functional. It’s up to the agency to analyze and approve the applicant’s SSP (System Security Plan), after which a Security Assessment Plan (SAP) is developed with a 3PAO (3rd party assessment organization). 3PAOs are organizations approved by the U.S. government to assess providers’ offerings and test their SAPs to ensure they are FedRAMP compliant.

Which certification process to choose?

JAB is good for providers offering services that can be utilized by multiple agencies; ATO is best for those providers that have developed what can best be described as a niche offering. FedRAMP doesn’t want to exclude agencies from being able to access a particular service if it perfectly meets their needs―hence, the ATO process. But regardless of which authorization process providers elect to pursue (and it is up to them), the goals are the same―secure and diverse cloud services options for federal agencies.

Even if you’re not a government agency…

Utilizing a cloud service provider that is FedRAMP-certified provides organizations peace of mind, whether they are a federal agency or not, in knowing that they’ve selected a company that has been carefully, and laboriously, vetted by the U.S. government. And that perfectly describes GDT, which has been FedRAMP-certified for years and secures the government cloud for agencies of all sizes. In addition, GDT provides cloud services for enterprises and service providers of all sizes, and from a variety of industries. They’d love to hear from you.

Understanding the Attack Surface

By Richard Arneson

Leave it to Hollywood to allow the smallest attack surface in history to be breached. In the first Star Wars movie, the Death Star, which appeared to be only slightly smaller than Earth, had a tiny aperture that, if penetrated, would magically destroy the entire, menacing orb. Naturally, it was hit―it’s Hollywood. Unfortunately, the attack surface of organizations, at least in networking terms, is quite a bit larger, probably far more so than you’d think.

The Attack Surface

Attack Surface refers to the collective sum of all points of entry or interaction that are vulnerable to malware, worms, Trojans, hackers, you name it. Attack Surfaces encompass three (3) areas of vulnerability: the network, the applications that traverse it, and people, or employees, who happen to pose the greatest security threat to organizations.

The Network

The bad guys are looking for networks with multiple interfaces; the more the better. Take tunnels, for instance, which are constructed between communication points through data encapsulation―they can pose a huge threat to network security. For data transmission, Point-to-Point Protocol (PPP) and VPNs encapsulate non-routable data inside routable data. When data arrives at its intended destination, the outer packet is stripped off, which allows the inner data to enter the private network. Here’s one of the issues: it’s difficult to know exactly what has been encapsulated, which can inadvertently provide a protective shield for hackers. Talk to the folks at Home Depot or Target; they’ll tell you about VPN-related security vulnerabilities.
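The mechanics are easy to see in miniature. In this toy sketch (the header format is invented for illustration), the tunnel endpoint strips the outer header and passes the inner payload along without ever inspecting it, which is precisely the blind spot described above:

```python
import struct

def encapsulate(inner: bytes, protocol_id: int) -> bytes:
    """Wrap a non-routable inner payload in a routable outer header
    (a 2-byte protocol id plus a 4-byte length, for illustration only)."""
    return struct.pack("!HI", protocol_id, len(inner)) + inner

def decapsulate(packet: bytes) -> tuple:
    """At the tunnel endpoint the outer header is stripped off and the
    inner data, whatever it happens to be, enters the private network."""
    protocol_id, length = struct.unpack("!HI", packet[:6])
    return protocol_id, packet[6:6 + length]

packet = encapsulate(b"private-lan-frame", protocol_id=0x880B)
proto, inner = decapsulate(packet)
print(hex(proto), inner)
```

Notice that nothing in `decapsulate` validates the inner bytes; real tunnels behave the same way unless deep packet inspection is layered on top, which is why encapsulated traffic can shield an attacker’s payload.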

Any outward-facing open ports (meaning they’re open to receiving packets) can add to a network’s Attack Surface by revealing information about a particular system, even the network’s architecture. Open ports sound negligent, even irresponsible, but they’re necessary in certain situations. For instance, think back to when you set up your personal e-mail account and entered incoming and outgoing port numbers. Those are open ports, and not opening them means you can’t send or receive your emails. Yes, open ports are often needed, but they can open the door to unseemly intentions.
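Checking whether a port answers is exactly as simple as an attacker’s scanner makes it look. Here’s a self-contained sketch using only the standard library; it opens its own listener so the result is deterministic:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Attempt a TCP connection; success means the port is reachable --
    the same probe a scanner performs against your attack surface."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Stand up a listening socket so the demo is self-contained.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # the OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

print(port_is_open("127.0.0.1", port))   # the port is exposed: part of the surface
listener.close()
print(port_is_open("127.0.0.1", port))   # closed again: surface reduced
```

Auditing your own hosts this way, then closing every port without a business reason to be open, is one of the cheapest attack-surface reductions available.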

Applications

Thanks to the rapid evolution of Cloud services, new applications to access them are being developed by the minute. Hackers, as well, are creating ways to access and exploit them…by the minute. The more code that is accessed and executed, the more code is exposed to users, including those of the unauthorized variety.

No question, cloud computing has greatly added to the complexity of securing vital data. The proliferation of applications requires commensurate security measures.

The Human Factor

As previously mentioned, employees, or authorized users, far and away pose the greatest security threats to organizations; they significantly expand the Attack Surface. Unauthorized applications are downloaded, emails from unknown senders are opened, and authorizations aren’t turned off after an employee leaves the company. And if they’re disgruntled ex-employees, the Attack Surface just got bigger. Even Instant Messaging programs can crack open a security door that was, or was believed to be, closed.

Attack Surface Questions? Turn to the Security Experts

Attack Surfaces, whether minimal or broad in scope, cost organizations worldwide over $2 trillion. Talking to the security experts at GDT should be your first order of business. Believing a security breach won’t happen to your company is setting you up for grave, and expensive, consequences in the future. From its state-of-the-art, 24x7x365 Security Operations Center (SOC), GDT’s security analysts and engineers manage and monitor network security for some of the most noted enterprises, service providers and government entities in the world. Contact them today; they’d love to hear from you.

HPE’s recent acquisition of Plexxi gives it a leg up on its composable competitors

By Richard Arneson

In May of this year, HPE announced its purchase of Plexxi, an eight-year-old, Boston-based company that set the IT world on fire based on this one (1) idea: data center networking needed to be less complicated, yet more powerful. They combined software-defined networking with intent-based automation that addressed workload and infrastructure awareness to revolutionize the way networks are managed. The result? Simplified tasks, increased efficiency, and reductions in complexity and costs.

With its purchase of Plexxi, HPE greatly enhanced its software-defined portfolio by combining Plexxi’s Next-Gen data center fabric with its existing software-defined infrastructure. HPE customers will be better equipped to enjoy a true cloud-like experience in their data center. Automatic creation and re-balancing of bandwidth will be able to perfectly address the needs of specific workloads, and applications can be deployed faster. Customers will be able to better, and faster, harness the true value of their data.

The two (2) Clear Values HPE is receiving as a result of its Plexxi acquisition

HPE is integrating Plexxi’s technology into its already robust hyperconverged solutions, which are the result, in part, of its 2017 purchase of SimpliVity. According to HPE, “The purchase of Plexxi will enable us to deliver the industry’s only hyperconverged offering that incorporates compute, storage and data fabric networking into a single solution, with a single management interface and support.”

HPE anticipates two (2) key, clear opportunities from its Plexxi purchase:

  1. The combination of the Plexxi and SimpliVity solutions and technologies will deliver to customers a dynamic, workload-based model that will much better align IT with their business goals. Prior to the Plexxi acquisition, Gartner’s Magic Quadrant for hyperconvergence already listed HPE as one of the industry’s leaders. With Plexxi, their lead just got longer.
  2. Plexxi’s technology will enhance HPE Synergy, its existing composable infrastructure portfolio that offers pools of resources for storage and compute. HPE Synergy is built on HPE OneView, which enables users, from a single interface, to accelerate application and service delivery, and allows logical infrastructures to be composed or recomposed at (near) instant speeds.

HPE, at last count, has almost 1,500 composable infrastructure customers. Now throw Plexxi into the mix, and that number will get bigger, in a hurry.

First, turn to the HPE and composable infrastructure experts at GDT

HPE is one of GDT’s premier partners, and their solutions and products have been architected, engineered, deployed and monitored by GDT for enterprises, government entities and some of the largest service providers in the world. GDT’s talented solutions architects and engineers are experts in delivering composable infrastructure solutions—including, of course, HPE Synergy—and helping organizations of all sizes enjoy its many benefits. They’d love to hear from you.

When Containers need a conductor―that’s Orchestration    

By Richard Arneson

Containers, if you recall from last week’s blogs, pull from the application layer and package code and related application dependencies into one (1) neat, tidy package. Remember, this provides a step up from hypervisors, which require each VM to run its own OS, making them less efficient, especially when heavy scaling is required. There are other benefits of containers, of course, and you can refresh your memory here: VM, Hypervisor or Container?

But the greatness of containerization―a fast, easy way to test and implement apps, address ever-fluctuating demands of users, quickly move apps between servers, et al.― can lead to management issues. The more containers that are created, the more inventory is created to maintain and manage. ZZ Top (3 members) doesn’t need a conductor, but when the New York Philharmonic (over a hundred) plays Beethoven’s 9th, it’s a must. And in the case of containerization, the conductor is called, appropriately, Orchestration.

Orchestration―making beautiful Container music

Orchestration software delivers a management platform for containers and helps define any relationships that exist between them. It can address containers’ need to scale, including how they talk to the world around them.

In short, Orchestration manages the creation, upgrading and availability of multiple containers, and controls connectivity between them. Entire container clusters can be treated as single deployments.

In addition, Orchestration provides:

  • A single, virtual host that can cluster multiple hosts together, all accessible through a single API.
  • Ease of host provisioning, and invalid nodes can be detected and automatically re-scheduled.
  • Linking of containers, including clusters maintained within containers.
  • The ability to control exactly when containers start and stop, and to group them into clusters, which can be formed for multiple containers that have common requirements. Clusters = easier management and monitoring.
  • The ability to easily handle processes related to an application, and included toolsets enable users to better steer deployments.
  • Automated updates, including the “health” of containers, and the ability to implement failover procedures.
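At its core, the behavior in the list above is a reconcile loop: compare desired state to observed state, then restart or reschedule containers to close the gap. The following is a deliberately simplified, deterministic sketch of that idea; real orchestrators (Kubernetes, Docker Swarm) do this continuously against a cluster API, and all names here are illustrative:

```python
# Minimal sketch of an orchestrator's reconcile loop: drop unhealthy
# containers and schedule replacements until the desired replica count
# is met. Illustrative only -- real orchestration is far richer.

def reconcile(desired_replicas: int, running: list) -> list:
    # Drop containers that failed their health check.
    healthy = [c for c in running if c["healthy"]]
    # Schedule replacements until the desired replica count is met.
    while len(healthy) < desired_replicas:
        healthy.append({"name": f"web-{len(healthy)}", "healthy": True})
    return healthy

observed = [
    {"name": "web-0", "healthy": True},
    {"name": "web-1", "healthy": False},  # failed its health check
]
cluster = reconcile(desired_replicas=3, running=observed)
print([c["name"] for c in cluster])  # three healthy containers
```

Automated failover, rescheduling of invalid nodes, and cluster-as-a-single-deployment all fall out of running a loop like this continuously.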

We’re living in an Application-Centric world

Applications get larger and more complex with each passing day, but without containerization (and Orchestration), their need to work harmoniously is unwieldy, time-consuming, expensive and takes personnel off the key projects and initiatives that will keep their organization competitive in the marketplace. If there’s a need to develop, test and deploy sophisticated applications, Containers and Orchestration can help you play the right tune.

Turn to the engineers and solutions architects at GDT for more information about Containers and Orchestration

The talented technical professionals at GDT are experienced at helping customers enjoy the many benefits that Containers and Orchestration can deliver. They work with organizations of all sizes, and from a wide variety of industries, including government and service providers. They’d love to hear from you.

Virtual Machine or Container…or Hypervisor? Read this, and you can make the call

By Richard Arneson

Containers have been around for years, but we’ll leave their history for another blog. Hypervisors, if you recall, are software that manage virtual machines (VMs), each of which can run its own programs but gives the appearance of running the host hardware’s memory, processor and resources. Hypervisors are, basically, a platform for VMs. But don’t be surprised to hear hypervisor and VM used interchangeably; they shouldn’t be, but it’s not uncommon. Just remember―hypervisors are the software that run VMs.

They’re both Abstractions, but at different layers

Hypervisors (VMs)―physical layer

Abstractions relate to something that’s pulled, or extracted, from something else. Hypervisors abstract physical resources, such as those listed above (memory, processor, and other resources), from the host hardware. And those physical resources can be abstracted for each of the virtual machines. The hypervisor abstracts the resources at a physical level, capable of, as an example, turning a single server into many, thus allowing for multiple VMs to run off a single machine. VMs run their own OS and applications, which can take up loads of resources, even boot up slowly.

Containers―application layer

Containers are, again, an abstraction, but pull from the application layer, packaging code and related dependencies into one (1) happy family. What’s another word for this packaging? Yep, containerization.

What are the benefits of containers over VMs?

Application Development

There are several benefits related to containers, but we’ll start with the differentiator that provides the biggest bang for the buck. Prior to containers, software couldn’t be counted on to reliably run when moved to different computing environments. Let’s say DevOps wants to move an application to a test environment. It might work fine, but it’s not uncommon for it to work―here’s a technical term―squirrelly. Maybe tests are conducted on Red Hat and production will be on, say, Debian. Or both locations have different versions of Python. Yep, squirrelly results.

In short, containers make it far easier for software developers by enabling them to know their creations will run, regardless of where they’ve been deployed.

Efficiency

Containers take up far less space than VMs, which, again, run their own OS. In addition, containers can handle more applications and require fewer VMs. Make no mistake, VMs are great, but when heavy scaling is required, you may find yourself dedicating resources that are, basically, managing a spate of operating systems.

And consider moving workloads between vendors with VMs. It’s not as simple as dragging an application from one OS to the other. A vSphere-based VM can’t have associated workloads moved to, say, Hyper-V.

Microservices

Microservices, which can run in containers, break down applications into smaller, bite-sized chunks. It allows different teams to easily work independently on different parts or aspects of an application. The result? Faster software development.

No, containers don’t mark the end of VMs and Hypervisors

In fact, containers and VMs don’t need to be mutually exclusive. VMs and containers can co-exist beautifully. As an example, a particular application may need to talk to a database on a VM. Containers can easily accommodate this particular scenario.

Sure, containers are efficient, self-contained systems that allow applications to run, regardless of where they’ve been deployed. But containers might not be the best option for all situations. And without expertise within IT departments to understand this difference, it will probably leave you wondering which―VMs or containers―will be the most beneficial to your organization. And, again, it might not be an either/or situation. For instance, as containers utilize one OS, it could, if you don’t have security expertise, leave you more open for security breaches than if utilizing VMs. Your best bet? Talk to experts like those at GDT.

Please, use your resources

You won’t find better networking resources than GDT’s talented solutions architects and engineers. They hold the highest technical certifications in the industry and have designed and implemented complex networking solutions for some of the largest enterprises and service providers in the world. They’d love to hear from you.

Shadow IT―you might be a participant and don’t even know it

By Richard Arneson

Everybody loves the cloud, and why wouldn’t they? The amount of innovation and productivity it has brought to businesses worldwide has been staggering. Where Salesforce once appeared to stand alone as the only cloud-based software service, it’s been joined over the past few years by thousands of applications that were once individually loaded on PCs (Office 365, the Adobe Creative Suite and WordPress come to mind). But with the good comes the bad―more accurately, the concerns―and, in the case of The Cloud, you can list issues related to security, governance and compliance as those that counterbalance the positive side of the Cloud ledger.

Shadow IT

Not to paint everybody with the same, broad brush stroke, but the preponderance of workers either have participated in Shadow IT, or continue to do so (it’s primarily the latter). Shadow IT refers to information technology that operates and is managed without the knowledge of the IT department―doesn’t sound very safe and secure, does it? Have you ever downloaded software that helps accomplish a task or goal without the knowledge of IT? Probably, right? That’s Shadow IT. But that’s not to say Shadow IT participants are operating with devious intentions; they do it for a variety of reasons, such as a need for expediency, or perhaps because corporate red tape, including required prerequisites, precludes it. Participants’ goals―efficiency, productivity―may be noble and spot-on, but their actions can create a host of security headaches and issues at some point in the future. And there’s a very good chance it will. It’s estimated that within one (1) year, data breaches worldwide will cost organizations a collective $2.1 trillion. Oh, and the United States has the highest cost per breach ($7.9 million) in the world. Shadow IT helps buoy those numbers. Thinking a security issue only happens to the other guy is living in a fool’s paradise.

Cloud Access Security Brokers (CASB)

Sending out policies and conducting training for employees regarding computer and network use is great, and strongly encouraged, but counting on everybody adhering to these mandates is unreasonable and impractical, especially if your company has tens of thousands of workers scattered throughout the world.

To address the issue of Shadow IT, the industry has developed Cloud Access Security Brokers (no, they’re not people, but software), a name Gartner coined five (5) years ago to describe cloud security solutions centered around four (4) pillars: visibility, compliance, data security and threat protection. CASB is software planted between a company’s IT infrastructure and the cloud, and is now offered by several vendors, including Cisco―its CASB solution is called CloudLock (you can read about it here – Cisco CloudLock).

CASB utilizes an organization’s security policies to secure the flow of data to and from its IT infrastructure and the cloud. It encrypts data, protects it from malware attacks, and helps defend against the scourge that is Shadow IT.

For more information…

With the help of its state-of-the-art Security Operations Center (SOC), GDT’s team of security professionals and analysts have been securing the networks of some of the most noteworthy enterprises and service providers in the world. They’re highly experienced at implementing, managing and monitoring Cisco security solutions. They’d love to hear from you.

What exactly is a Network Appliance?

By Richard Arneson

We work in an industry rife with nomenclature issues. For instance, Hybrid IT is often used interchangeably with Hybrid Cloud―it shouldn’t be; they’re different. They were even referred to as such in an “also known as” manner within a beautiful, 4-color brochure produced by one of the leading equipment vendors in the IT industry. I’ve seen hyperconverged substituted for converged, SAN confused with NAS, SDN and SD-WAN listed as equivalents. The list is seemingly endless.

The good news? Getting the answer is pretty easy, and only a few clicks away. Yes, Google is, for most, the source of correct answers. Ask it a question, then read through the spate of corresponding articles from reputable sources, and you can generally deduce the right answer. When ninety-eight (98) answers say it’s A, and one (1) claims it’s B―it’s probably A.

When does “it” become an Appliance?

Sitting in a non-company presentation recently, I heard the word appliance used several times, and, even though I’ve been in the IT and telecommunications industry for years, I realized I didn’t technically know what appliance meant, or how it differed from other networking equipment. I turned to the person seated at my left and asked, “What’s the difference between an appliance and a piece of networking equipment, be it a router, server, etc.?” The answer he provided offered little help. In an attempt to hide my dissatisfaction, I quietly whispered the same question to an engineer on my right. His answer could be only slightly construed as similar to the first response―slightly. In fact, the only true commonality between the answers came in the form of two (2) words―single function. Clear as Mississippi mud pie, right? During a break, I asked the question of several in attendance, and got answers that ran a mile wide and an inch deep, but provided, essentially, little information, possibly less than before.

I turned to Google, of course. But I discovered something I didn’t believe was possible―there was literally no definition or information I could find that even attempted to distinguish what, exactly, makes for a network appliance. According to “my history” in Google Chrome, I typed in over thirty (30) variations of the same question. Nothing. Frustrating. But I had something better than Google.

It works with governmental elections

GDT has over two-hundred (200) solutions architects and engineers, all talented and tenured, who have earned, collectively, well over one thousand (1,000) of the industry’s highest certifications. Why not poll some of the industry’s best and brightest with the question, “What differentiates an ‘appliance’ from other networking equipment?”

They weren’t allowed to reply “TO ALL,” so that others’ answers wouldn’t influence theirs. Also, they couldn’t Google the question, or any derivative thereof, which, based on my experience, wouldn’t have helped anyway.

Drum roll, please

Responses came pouring in, even though it was after 5 PM on a Friday. So in lieu of posting well over one hundred (100) responses, I decided to craft, based on those responses (one was even a haiku), a definition of a network appliance related to how it’s differentiated from a non-appliance. Here goes…

A network appliance is different than a non-appliance because it comes pre-configured and is built with a specific purpose in mind.

And because I’m a fan of analogies, here’s one I received:

“You can make toast in the oven, but you’ve got a toaster, a device that is specifically made for making toast. Because it’s designed for a narrow problem set, the toaster is smaller than the oven, more energy efficient, easier to operate, and cheaper. An appliance is something that is able to be better than a general-purpose tool because it does less.”

And for you Haiku fans:

“It is a server

Or a virtual machine

That runs services”

There it is―a definition, an analogy, even a Haiku. Now don’t get me started on the word device.

Turn, like I did, to the experts

GDT’s team of solutions architects and engineers maintain the highest certification levels in the industry. They’ve crafted, installed and currently manage the networks and security needs of some of the largest enterprises and service providers in the world. Great folks; they’d love to hear from you.

Riding the Hyperconvergence Rails

By Richard Arneson

If your organization isn’t on, or planning to get on, the road to hyperconvergence (HCI), you may soon be left waving at your competitors as the HCI train flies by. A recent industry study found that approximately 25% of companies currently use hyperconvergence, and another 23% plan on moving to it by the end of this year. And those percentages are considerably higher in certain sectors, such as healthcare and government. In addition to the many benefits HCI delivers—software-defined storage (SDS), an easier way to launch new cloud services, modernization of application development and deployment, and far more flexibility for data centers and infrastructures—it is currently providing customers, according to the study, an average of 25% in OPEX savings. It might be time to step up to the ticket window.

All Aboard!

If you haven’t heard about Dell EMC’s VxRail appliances, it’s time you do―they’ve been around for about two (2) years now. In that first year alone, they sold in excess of 8,000 nodes to well over 1,000 customers. And in May of this year, they announced a significant upgrade to their HCI portfolio with the launch of more robust VxRail appliances, including significant upgrades to VxRack, its Software-Defined Data Center (SDDC) system. VxRail was closely developed with VMware, of which Dell EMC owns eighty percent (80%).

The VxRail Portfolio of Appliances

All VxRail appliances listed below offer easy configuration flexibility, including future-proof capacity and performance with NVMe cache drives, 25GbE connectivity, and NVIDIA P40 GPUs (graphics processing units). They’re all built on Dell EMC’s latest PowerEdge servers, which are powered by Intel Xeon Scalable processors, and are available in all-flash or hybrid configurations.

G Series―the G in G-Series stands for general, as in general purpose appliance. It can handle up to four (4) nodes in a 2U chassis.

E Series―whether deployed in the data center or at the edge (hence the letter E), the E Series’ sleek, low-profile design fits into a 1U chassis.

V Series―the V stands for video; it is VDI-optimized and graphics-ready, and can support up to three (3) graphics accelerators for high-end 2D or 3D visualization. The V Series appliance provides one (1) node in its 2U profile.

P Series―P for performance. Each P Series appliance is optimized for the heaviest of workloads (think databases). Its 2U profile offers one (1) node per chassis.

S Series―Storage is the operative word here, and the S Series appliance is perfect for storage-dense applications, such as Microsoft Exchange or SharePoint. And if big data and analytics are on your radar screen, the S Series appliance is the right one for you. Like the P and V Series appliances, the S Series provides one (1) node in its 2U profile.

And to help you determine which VxRail appliance is right for your organization, Dell EMC offers a nifty, simple-to-use VxRail Right Sizer Tool.

Perfect for VMware Customers

VMware customers are already familiar with the vCenter Server, which provides a centralized management platform to manage VMware environments. All VxRail appliances can be managed through it, so there’s no need to learn a new management system.

Questions about Hyperconvergence or VxRail?

For more information about hyperconvergence, including what Dell EMC’s VxRail appliances can provide for your organization, contact GDT’s solutions architects and engineers. They hold the highest technical certification levels in the industry, and have designed and implemented hyperconverged solutions, including ones utilizing GDT partner Dell EMC’s products and services, for some of the largest enterprises and service providers in the world. They’d love to hear from you.

When good fiber goes bad

By Richard Arneson

Fiber optics brings to mind a number of things, all of them great: speed, reliability, high bandwidth, long-distance transmission, immunity to electromagnetic interference (EMI), and strength and durability. Fiber optic cable is made of fine glass, which might not sound durable, but flip the words fiber and glass and you’ve got a different story.

Fiberglass, as the name not so subtly suggests, is made up of glass fibers―at least partially. It achieves its incredible strength once it is combined with plastic. Originally used as insulation, the fiberglass train gained considerable steam in the 1970s after asbestos, which had been widely used for insulation for over fifty (50) years, was found to cause cancer. But that’s enough about insulation.

How Fiber goes bad

As is often the case with good things, fiber optics doesn’t last forever. Or, it should be said, it doesn’t perform ideally forever. There are several issues that prevent it from delivering its intended goals.

Attenuation

Data transmission over fiber optics involves shooting light between input and output locations, and if the light intensity degrades, or loses its power, it’s known as attenuation. High attenuation is bad; low is good. There’s actually a mathematical equation that calculates the degree of attenuation, and this sum of all losses can be caused by a degradation in the fiber itself, poor splice points, or any point or junction where it’s connected.
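That mathematical equation is straightforward: total loss in decibels compares the power launched into the fiber against the power that arrives. A quick sketch in Python (the 1.0 mW and 0.25 mW figures are illustrative, not from any particular link):

```python
import math

def attenuation_db(power_in_mw: float, power_out_mw: float) -> float:
    """Total signal loss in decibels: 10 * log10(P_in / P_out)."""
    return 10 * math.log10(power_in_mw / power_out_mw)

# A link launched at 1.0 mW and received at 0.25 mW has lost
# 10 * log10(4), about 6.02 dB, summed across the fiber itself,
# splice points, and every connector along the way.
loss = attenuation_db(1.0, 0.25)
print(f"{loss:.2f} dB")
```

Low numbers are what you want; a fiber characterization study essentially itemizes where those decibels are being lost.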

Dispersion

When you shine a flashlight, the beam of light disperses over distance. This is dispersion. It’s expected, usually needed, when using a flashlight, but it’s not your friend when it occurs in fiber optics. In fiber, dispersion occurs as a result of distance; the farther the signal is transmitted, the weaker, or more degraded, it becomes. The fiber must propagate enough light to meet the bare minimum required by the receiving electronics.

Scattering

Signal loss or degradation can occur when there are microscopic variations in the fiber, which, well, scatter the light. Scattering can be caused by fluctuations in the fiber’s composition or density, and is most often due to issues in manufacturing.

Bending

When fiber optic cables are bent too much (and yes, there’s a mathematical formula for that), there can be a loss or degradation in data delivery. Bending can cause the light to be reflected at odd angles, and can be due to bending of the outer cladding (macrobending) or bending within it (microbending).

To the rescue―the Fiber Optic Characterization Study

Thankfully, determining the health of fiber optics doesn’t rely on a Plug it in and see if it works approach. It’s a good thing, considering there are an estimated 113,000 miles of fiber optic cable traversing the United States. And that number just represents “long haul” fiber, and doesn’t include fiber networks built within cities or metro areas.

Fiber Characterization studies determine the overall health of a fiber network. The study consists of a series of tests that ultimately determine if the fiber in question can deliver its targeted bandwidth. As part of the study, connectors are tested (which cause the vast majority of issues), and the types and degrees of signal loss are calculated, such as core asymmetry, polarization, insertion and optical return loss, backscattering, reflection and several types of dispersion.

As you probably guessed, Fiber Characterization studies aren’t conducted in-house, unless your house maintains the engineering skill sets and equipment to carry them out.

Questions about Fiber Characterization studies? Turn to the experts

Yes, fiber optics is glass, but that doesn’t mean it will last forever, even if it never tangles with its arch nemesis―the backhoe. If it’s buried underground, or is strung aerially, it does have a shelf life. And while its shelf life is far longer than its copper or coax counterparts, it will degrade, then fail, over time. Whether you’re a service provider or utilize your own enterprise fiber optic network, success relies on the three (3) D’s―dependable delivery of data. A Fiber Characterization Study will help you achieve those.

If you have questions about optical networking, including Fiber Characterization studies, contact the GDT Optical Transport Team. They’re highly experienced optical engineers and architects who support some of the largest enterprises and service providers in the world. They’d love to hear from you.

The Hyper in Hyperconvergence

By Richard Arneson

The word hyper probably brings to mind energy, and lots of it, possibly as it relates to a kid who paints on the dining room wall or breaks things, usually of value. But in the IT industry, hyper takes on an entirely different meaning, at least when combined with its compound counterpart―visor.

Hyperconvergence, in regards to data center infrastructures, is a step-up from convergence, and a stepping stone to composable infrastructure. And, of course, convergence is an upgrade from traditional data center infrastructures, which are still widely used but eschew the use of, among other things, virtualization. Traditional data center infrastructures are heavily siloed, requiring separate skill sets in storage, networking, software, et al.

The Hypervisor―the engine that drives virtualization

Another compound word using hyper is what delivers the hyper in hyperconvergence ― hypervisor. In hyperconvergence, hypervisors manage virtual machines (VMs), each of which can run its own programs but gives the appearance of running the host hardware’s memory, processor and resources. The word hypervisor sounds like a tangible product, but it’s software, and is provided by, among others, market leaders VMware, Microsoft and Oracle. This hypervisor software is what allocates those resources, including memory and processor, to the VMs. Think of hypervisors as a platform for virtual machines.
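The allocation job described above can be sketched in a few lines of Python. This is purely conceptual; the class name, resource figures and VM names are illustrative, not any vendor’s API:

```python
# Conceptual sketch of a hypervisor's core job: carving a host's physical
# resources into allocations for each VM. Numbers and names are made up.

class Hypervisor:
    def __init__(self, total_cpus: int, total_mem_gb: int):
        self.free_cpus = total_cpus
        self.free_mem_gb = total_mem_gb
        self.vms = {}

    def create_vm(self, name: str, cpus: int, mem_gb: int) -> bool:
        """Allocate abstracted CPU and memory to a new VM, if available."""
        if cpus > self.free_cpus or mem_gb > self.free_mem_gb:
            return False  # the host is out of resources
        self.free_cpus -= cpus
        self.free_mem_gb -= mem_gb
        self.vms[name] = (cpus, mem_gb)
        return True

host = Hypervisor(total_cpus=16, total_mem_gb=64)
host.create_vm("web", cpus=4, mem_gb=8)
host.create_vm("db", cpus=8, mem_gb=32)
print(host.free_cpus, host.free_mem_gb)  # 4 CPUs, 24 GB left
```

One physical server, several isolated VMs, each believing it owns its slice of hardware: that’s the abstraction the hypervisor delivers.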

Two (2) Types of Hypervisors

Hypervisors come in two (2) flavors, and deciding between either comes down to several issues, including compatibility with existing hardware, the level and type of management required, and performance that will satisfy your organization’s specific needs. Oh, and don’t forget budgetary considerations.

Bare-Metal – Type 1

Type 1 hypervisors are loaded directly onto hardware that doesn’t come pre-loaded with an Operating System. Type 1 hypervisors are the Operating System, and are more flexible, provide better performance and, as you may have guessed, are more expensive than their Type 2 counterparts. They usually run on single-purpose servers that become part of the resource pools supporting multiple applications for virtual machines.

Hosted – Type 2

A Type 2 hypervisor runs as an application loaded in the Operating System already installed on the hardware. But because it’s loaded on top of the existing OS, it creates an additional layer of programming, or hardware abstraction, which is another way of saying less efficient.

So which Type will you need?

In the event you’re looking to move to a hyperconverged infrastructure, both the type of hypervisor, and from which partner’s products to choose, will generate a spate of elements to evaluate, such as the management tools you’ll need, which hypervisor will perform best based on your workloads, the level of scalability and availability you’ll require, and, of course, how much you’ll be able to afford.

It’s a big decision, so consulting with hyperconvergence experts should probably be your first order of business. The talented solutions architects and engineers at GDT have delivered hyperconvergence solutions to enterprises and service providers of all sizes. They’d love to hear from you.

How does IoT fit with SD-WAN?

By Richard Arneson

Now that computing has been truly pushed out to the edge, it brings up questions about how it will mesh with today’s networks. The answer? Very well, especially regarding SD-WAN.

IoT is composed of three types of components that make it work: sensors, gateways and the cloud. No, smart phones aren't on the list. The technology sector is particularly adept at using words interchangeably when it shouldn't, and in this case the confusing word is device. When you hear estimates that connected devices will number more than 20 billion by 2020, smart phones aren't part of that figure. While smart phones are often called devices, and do contain sensors that detect tilt (gyroscope) and acceleration (accelerometer), IoT sensors are distinct from the equipment that simply provides Internet connectivity: laptops, tablets and, yes, smart phones.

Sensors and Gateways and Clouds…oh my

Sensors are the edge devices; they can detect, among other things, temperature, pressure, water quality, and the presence of smoke or gas. Think Ring Doorbell or Nest Thermostat.

The gateway can be hardware or software (sometimes both), and handles the aggregation of connectivity and the encryption and decryption of IoT data. Gateways translate the protocols used by IoT sensors and handle management, onboarding (storage and analytics) and edge computing. As the name suggests, gateways serve as a bridge between IoT devices and their associated protocols, such as Wi-Fi or Bluetooth, and the environment where the gathered data gets used.
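The gateway's translate-and-aggregate role can be sketched in a few lines. This is illustrative only: the protocol names and field layouts below are invented, not any vendor's actual wire format.

```python
# Sketch of an IoT gateway's core job: normalize readings arriving in
# different per-protocol formats, then aggregate them into one
# cloud-facing upload.
import json

def normalize(reading):
    # Translate each protocol's payload into a single common schema.
    if reading["proto"] == "ble":
        return {"sensor": reading["id"], "temp_c": reading["t"] / 10}
    if reading["proto"] == "wifi":
        return {"sensor": reading["id"], "temp_c": reading["temp"]}
    raise ValueError("unknown protocol")

def gateway_batch(readings):
    # Aggregate many small sensor messages into one upload body.
    return json.dumps([normalize(r) for r in readings])

batch = gateway_batch([
    {"proto": "ble", "id": "s1", "t": 215},
    {"proto": "wifi", "id": "s2", "temp": 22.5},
])
print(batch)
```

The cloud side never sees the Bluetooth or Wi-Fi specifics; it only receives the normalized, aggregated batch, which is exactly the bridging the text describes.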

SD-WAN and IoT

SD-WAN simplifies network management―period. And a subset of that simplicity comes in the form of visibility and predictability, which is exactly what IoT needs. SD-WAN can help ensure IoT devices in remote locations get the bandwidth and security they need, which is especially important considering IoT devices don't have much computing power (for example, usually not enough to support Transport Layer Security (TLS)).

SD-WAN gives network managers the ability to segment traffic by type―in this case, IoT―so device traffic can always be sent over the optimal path. And SD-WAN traffic can be sent directly to a cloud services provider, such as AWS or Azure. In traditional architectures, such as MPLS, the traffic has to be backhauled to a data center before being handed off to the Internet. Hello, latency: not good for IoT devices that need real-time access and updating.
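The segmentation-plus-path-selection idea can be sketched as a small policy table. The link names, metrics and policies below are made up for illustration; a real SD-WAN controller measures paths continuously rather than consulting a static dictionary.

```python
# Sketch of SD-WAN-style policy routing: classify traffic by type,
# then pick the best available transport for that class.
LINKS = {
    "mpls":      {"up": True, "latency_ms": 30},
    "broadband": {"up": True, "latency_ms": 18},
    "lte":       {"up": True, "latency_ms": 60},
}

POLICY = {
    # IoT telemetry wants the lowest-latency path that's up;
    # bulk backup traffic can ride whatever is cheapest.
    "iot":    lambda links: min((n for n in links if links[n]["up"]),
                                key=lambda n: links[n]["latency_ms"]),
    "backup": lambda links: "broadband" if links["broadband"]["up"] else "lte",
}

def choose_path(traffic_class, links=LINKS):
    return POLICY[traffic_class](links)

print(choose_path("iot"))  # broadband (lowest latency right now)
```

If the broadband link fails, the same policy re-evaluates and IoT traffic shifts to the next-best path automatically, which is the resiliency the article is pointing at.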

SD-WAN is transport-agnostic, and can run over virtually any existing connection, such as cellular, broadband and Wi-Fi, which makes it easier to connect devices in some of the more far-flung locations. And management can be handled from a central location, which makes it easier to integrate services across the IoT architecture of your choosing.

As mentioned earlier, there will be an estimated 20 billion IoT devices in use by 2020, up from 11 billion presently (and more than 50 billion by 2025). The number of current endpoints is amazing, but the growth rate is truly staggering. And for IoT to deliver on its intended capabilities, it needs a network that can help it deliver access to real-time data. That sounds like SD-WAN.

Here’s a great resource

To find out more about SD-WAN and exactly how it provides an ideal complement to IoT, contact GDT's tenured SD-WAN engineers and solutions architects. They've implemented SD-WAN and IoT solutions for some of the largest enterprise networks and service providers in the world. They'd love to hear from you.

Unwrapping DevOps

By Richard Arneson

As the name suggests, DevOps is the shortened combination of two words: development and operations. Originally, application development was time-consuming, fraught with errors and bugs and, ultimately, resulted in the bane of the business world: slow time to market.

Prior to DevOps, which addresses that slow-to-market issue, application developers worked in sequestered silos, collaborating with operations minimally, if at all. They'd gather requirements from operations, write huge chunks of code, then deliver their results weeks, maybe months, later.

The primary issue that can sabotage any relationship, whether personal or professional, is a lack of communication. Now sprinkle collaboration into the mix, and you have DevOps. It broke down the communication and collaboration walls that still exist, where DevOps isn't being used, between development and operations. The result? Faster time to market.

Off-Shoot of Agile Development

DevOps, which has been around for approximately ten years, was born of Agile development, created roughly ten years before that. Agile is, simply, an approach to software development that, as the name suggests, delivers the final product with more speed, or agility. It breaks software development into smaller, more manageable chunks and solicits feedback throughout the process. As a result, application development became far more flexible and able to respond to needs and changes much faster.

While many use Agile and DevOps interchangeably, they’re not the same

While Agile provides tremendous benefits for software development, it stops short of what DevOps provides. DevOps can certainly use Agile methodologies, but it doesn't drop off the finished product and quickly move on to the next one. Agile is a little like getting a custom-made device that solves some type of problem; DevOps will make the device as well, but will also install it in the safest and most effective manner. In short, Agile is about developing applications; DevOps both develops and deploys them.

How does DevOps address Time to Market?

Prior to DevOps and Agile, application developers would hand their release to operations, which was responsible for testing the resulting software. And when testing isn't conducted throughout the development process, operations is left with a very large application, often littered with issues and errors. Hundreds of thousands of lines of code that touch multiple databases, networks and interfaces can require a tremendous number of man-hours to test, which in turn pulls those hours off other pressing projects: inefficient and wasteful. Often there was no single person or entity responsible for overseeing the entire project, and each department might have different success metrics. Going back to the relationship analogy, poor communication and collaboration mean frustration and dissatisfaction for all parties involved. And with troubled relationships comes finger-pointing.


One of the key elements of DevOps is its use of automation, which helps deliver faster, more reliable deployments. With the automated testing tools currently available, such as Selenium, Test Studio and TestNG, test cases can be constructed and then run while the application is being built. This reduces testing time dramatically and helps ensure each process and feature has been developed error-free.

Automation is utilized for more than just testing, however. Workflows in development and deployment can be automated, enhancing collaboration and communication and, of course, shortening the delivery process. Production-ready environments that have already been tested can be continuously delivered. Real-time reporting can provide a window into any changes, or defects, that have taken place. And automated processes mean fewer mistakes due to human error.
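A minimal example of the kind of check such pipelines run continuously looks like this. It uses Python's built-in unittest rather than Selenium or TestNG, purely to show the shape; the function under test is invented for illustration.

```python
# A small, independently testable unit plus the automated tests a
# DevOps pipeline would run on every change.
import unittest

def apply_discount(price, percent):
    # The unit under test.
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False, argv=["discount-tests"])
```

Because these tests run automatically on every commit, defects surface while the code is still small and fresh, instead of landing on operations as one giant, untested release.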

Questions about what DevOps can deliver to your organization?

While DevOps isn't a product, it's certainly an integral consideration when evaluating a Managed Services Provider (MSP). GDT's DevOps professionals have time and again designed and deployed customer solutions that shorten time to market and deliver positive business outcomes. For more information about DevOps and the many benefits it can provide to organizations of all sizes, contact GDT's talented, tenured solutions architects. They'd love to hear from you.

How do you secure a Cloud?

By Richard Arneson

Every organization has moved to The Cloud, plans to, or wants to. And by 2020, most will be there. According to a recent survey, within two years 83% of enterprise workloads will be in The Cloud: 41% on public Clouds, like AWS and Microsoft Azure, 20% in private Clouds, and 22% as part of a hybrid architecture. With the amount of traffic already accessing The Cloud, and considering those survey figures, security will remain at the forefront of IT departments' collective minds, as well it should.

With organizations selectively determining what will run in The Cloud, security can prove challenging. Now throw in DevOps' ability to build and test Cloud apps faster and more easily, and you've amped up those Cloud security concerns significantly.

Security Solutions geared for The Cloud

To address the spate of Cloud-related security concerns, Cisco built an extensive portfolio of solutions, listed below, to secure customers’ Cloud environments, whether public, private, or a combination of both (hybrid).

Cisco Cloudlock

The Cloudlock DLP (Data Loss Prevention) technology doesn't rest; it continuously monitors Cloud environments to detect, then protect, sensitive information. Cloudlock controls the Cloud apps that connect to customers' networks, enforces data security and security policies, and provides risk profiles.

Cisco Email Security

Cisco Email Security protects Cloud-hosted email, defending organizations from threats and phishing attacks in G Suite and Office 365.

Cisco Stealthwatch Cloud

Stealthwatch Cloud detects abnormal behavior and threats, then quickly quells them before they evolve into a disastrous breach.

Cisco Umbrella

Cisco Umbrella provides user protection regardless of the type, or location, of Internet access. It utilizes deep threat intelligence to provide a safety net—OK, an umbrella—for users by blocking their access to malicious online destinations and thwarting suspect callback activity.

Cisco SaaS Cloud Security

If users are off-network, anti-virus software is often the only protection available. Cisco’s AMP (Advanced Malware Protection) for Endpoints prevents threats at their point of entry, and continuously tracks each and every file that accesses those endpoints. AMP can uncover the most advanced of threats, including ransomware and file-less malware.

Cisco Hybrid Cloud Workload Protection

Cisco Tetration, the company's proprietary analytics system, provides workload protection for multicloud environments and data centers. It uses zero-trust segmentation, which enables users to quickly identify security threats and reduce their attack surface (all the endpoints where threats can gain entry). It supports on-prem and public Cloud workloads, and is infrastructure-agnostic.

Cisco’s Next-Gen Cloud Firewalls

Cisco’s VPN capabilities and virtual Next-Gen Firewalls provide flexible deployment options, so protection can be administered exactly where and when it’s needed, whether on-prem or in the Cloud.

For more information…

With the help of its state-of-the-art Security Operations Center (SOC), GDT's team of security professionals and analysts has been securing the networks of some of the most noteworthy enterprises and service providers in the world. They're highly experienced at implementing, managing and monitoring Cisco security solutions. They'd love to hear from you.


Flash, yes, but is it storage or memory?

By Richard Arneson

We've all been pretty well trained to believe that, at least in the IT industry, anything labeled "flash" is a good thing. It conjures up thoughts of speed ("in a flash"), which is certainly one of the most operative words in the industry: everybody wants "it" done faster. But the difference between flash memory and flash storage is often confused, as both store information and both are referred to as solid state storage. For instance, a thumb drive utilizes flash memory, but is considered a storage device, right? And both are solid state, which means neither is mechanical; they're electronic. Mechanical means moving parts, and moving parts mean being prone to failure from drops, bumps, shakes or rattles.

Flash Memory―short-term storage

Before getting into flash memory, just a quick refresher on what memory accomplishes. Memory can be viewed as short-term data storage, maintaining information that a piece of hardware is actively using. The more applications you’re running, the more memory is needed. It’s like a workbench, of sorts, and the larger its surface area, the more projects you can be working on at one time. When you’re done with a project, you can store it long-term (data storage), where it’s easily retrieved when needed.

Flash memory is non-volatile, meaning it doesn't require power to retain its data. It's quickly accessible, smaller, and more durable than volatile memory such as RAM (Random Access Memory), which can only be accessed while the device is powered on. Once the power is off, data in RAM is gone.

Flash Storage―storage for the long term

Much like a combustion engine needs fuel, flash storage, the engine, needs flash memory, the fuel, to run. It's non-volatile (it retains data without power), and utilizes one of two types of flash memory: NAND or NOR.

NAND flash memory writes and reads data in blocks, while NOR does so in individual bytes. NOR flash is faster and more expensive, and better suited to processing small amounts of code; it's often used in mobile phones. NAND flash is generally used in devices that need to upload and/or replace large files, such as photos, music or videos.
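The access-pattern difference can be pictured with a toy read model. The block size and byte layout here are arbitrary; real NAND pages are kilobytes, and this sketch only contrasts byte-addressable with block-addressable reads.

```python
# Toy illustration: NOR-style flash is byte-addressable, while
# NAND-style flash reads whole blocks (pages) at a time.
BLOCK_SIZE = 4

storage = bytearray(b"ABCDEFGHIJKL")

def nor_read(addr):
    # NOR: fetch a single byte directly at any address.
    return bytes(storage[addr:addr + 1])

def nand_read(block_index):
    # NAND: must fetch the entire block containing the data.
    start = block_index * BLOCK_SIZE
    return bytes(storage[start:start + BLOCK_SIZE])

print(nor_read(5))   # b'F'
print(nand_read(1))  # b'EFGH'
```

Byte addressing is why NOR suits executing small chunks of code in place, while block transfers make NAND efficient for streaming large files.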

Confusion between flash storage and flash memory might be non-existent for some, maybe even most, but it's astounding how much information out there either confuses the two or does a poor job of differentiating them.

Contact the Flash experts

For more information about flash storage, including all-flash arrays, which contain many flash memory drives and are ideal for large enterprise and data center solutions, contact the talented, tenured solutions architects and engineers at GDT. They're experienced at designing and implementing storage solutions, whether on-prem or in the cloud, for enterprises of all sizes. They'd love to hear from you.

When considering an MSP, don’t forget these letters: ITSM and ITIL

By Richard Arneson

It's not hard to find a Managed Services Provider (MSP); the hard part is finding the right one. There are many, many things to consider when evaluating MSPs: the quality of their NOC and SOC (don't forget the all-important SOC), the experience of the professionals who manage and maintain them second by second, how long they've been providing managed services, the breadth and depth of their knowledge, and the range of customer sizes and industries they serve. But there's something else that should be considered, and asked about, when evaluating MSPs: whether they utilize ITSM and ITIL methodologies.

ITSM (Information Technology Service Management)

ITSM is an approach to the design, delivery, management and overall improvement of an organization's IT services. Quality ITSM delivers the right people, technology, processes and toolsets to address business objectives. If you currently manage IT services for your organization, you have, whether you know it or not, an ITSM strategy. Chances are that if you don't know you have one, it might not be very effective, which could be one of the reasons you're evaluating MSPs.

Ensure the MSPs you’re evaluating staff their NOC and SOC with professionals who adhere to ITSM methodologies. If an ITSM is poorly constructed and doesn’t align with your company’s goals, it will negatively reflect on whether ITIL best practices can be achieved.

ITIL (Information Technology Infrastructure Library)

ITIL is a best-practices framework that helps align IT with business needs. It lays out complete guidelines for five key IT lifecycle service areas: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. ITIL's current version is 3 (V3), so don't just ensure they follow ITIL methodologies; make certain they're well-versed in ITIL V3, which addresses twenty-eight different business processes that affect a company's ITSM.

Here’s the difference in ITSM and ITIL that you need to remember

ITSM is how IT services are managed; ITIL is a best-practices framework for ITSM. Put simply, ITSM is what you do, and ITIL is how to do it. ITIL helps make sense of ITSM processes. ITIL isn't the only framework of its type in the IT industry, but it's undoubtedly the most widely used.

Without understanding the relationship between ITSM and ITIL, companies won't realize business agility, operational transparency, or reductions in downtime and costs. And if your MSP doesn't understand that relationship, it's far less likely to deliver those benefits.

For more info, turn to Managed Services Experts

Selecting an MSP is a big decision. Turning over the management of your network and security can be a make-or-break decision. Ensuring that they closely follow ITSM and ITIL methodologies is critically important.

For more information about ITSM and ITIL, contact the Managed Services professionals at GDT. They manage networks and security for some of the largest companies and service providers in the world from their state-of-the-art, 24x7x365 NOC and SOC. They'd love to hear from you.

The story of the first Composable Infrastructure

By Richard Arneson

In 2016, HPE introduced the first composable infrastructure solution to the marketplace. Actually, it didn't just introduce the first solution; it created the market. HPE recognized, along with other vendors and customers, some of the limitations inherent in hyperconvergence, which had given enterprise data centers a cloud-like experience on on-premises infrastructure. But HPE was the first company to address those limitations, such as the requirement for separate silos of compute, storage and network: if one silo needed upgrading, the others had to be upgraded as well, even when it wasn't needed. And hyperconvergence required multiple programming interfaces; with composable infrastructure, a unified API can transform the entire infrastructure with a single line of code.

HPE Synergy

HPE Synergy was the first composable infrastructure platform built from the ground up, and it's the very definition of HPE's Idea Economy: the belief, in HPE's words, "that disruption is all around us, and the ability is needed to turn an idea into a new product or a new industry."

HPE set out to address the elements that proved difficult, if not impossible, with traditional technology, such as the ability to:

  • Quickly deploy infrastructure through flexibility, scaling and updating
  • Run workloads anywhere, whether on physical or virtual servers…even in containers
  • Operate any workload without worrying about infrastructure resources or compatibility issues
  • Ensure the infrastructure can provide the right service levels to drive positive business outcomes


The foundation of HPE's composable infrastructure is the HPE Synergy 12000 frame (ten rack units), which combines compute, storage, network and management in a single infrastructure. The frame's front module bays easily accommodate and integrate a broad array of compute and storage modules. Two bays are reserved for management; the Synergy Composer, loaded with HPE OneView software, composes storage, compute and network resources in the customer's configuration of choice. OneView templates are provided for provisioning each of the three resources, and can monitor, flag and remediate server issues based on the profiles associated with them.

Frames can be added as workloads increase, and a pair of Synergy Composer appliances can manage up to twenty-one frames in a single management domain.

A Unified API

The Unified API allows users, through the Synergy Composer user interface, to access all management functions. It operates at a high abstraction level and makes actions repeatable, which greatly saves time and reduces errors. And remember, a single line of code can address compute, storage and network, which greatly streamlines and accelerates provisioning, and allows DevOps teams to work and develop more rapidly.
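What "a single line of code" against a unified API looks like can be sketched with a hypothetical, template-driven request. To be clear, the endpoint and field names below are invented for illustration; they are not HPE OneView's actual API.

```python
# Hypothetical sketch of a unified-API request: one template-driven
# body covers compute, storage and network at once, instead of three
# separate per-silo interfaces.
import json

def compose(template_name, count):
    # Build the single request body a unified API needs; a real
    # client would POST this to the composer appliance.
    return json.dumps({
        "template": template_name,  # bundles compute+storage+network
        "count": count,
    })

# One call provisions three web-tier servers, their storage and
# their network connectivity together.
request_body = compose("web-tier-profile", 3)
print(request_body)
```

The design point is that the template, not the caller, carries the per-silo detail, which is why provisioning becomes repeatable and far less error-prone.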


Compute

HPE compute modules, which come in a wide variety based on the types of workloads required, create a pool of flexible capacity that can be configured to rapidly, practically instantaneously, provision infrastructure for a broad range of applications. All compute modules deliver high performance, scalability, and simplified storage and configuration.


Storage

Composable storage with HPE Synergy is agile and flexible, offering many options that address a variety of storage needs: SAS, SFF, NVMe SFF, Flash uFF, or diskless.

Network (aka Fabric)

HPE Synergy Composable Fabric simplifies network connectivity by using disaggregation to create a cost-effective, highly available and scalable architecture. It creates pools of flexible capacity that provision rapidly to address a broad range of applications. It's enabled by HPE Virtual Connect, and can match workload performance needs with its low-latency, multi-speed architecture. A single device can converge traffic across multiple frames (creating a rack-scale architecture) and connect directly to external LANs.

Talk to the experts

For more information about HPE Synergy and what it can provide to your organization, contact the talented, tenured solutions architects and engineers at GDT. They're experienced at designing and implementing composable and hyperconverged solutions for enterprises of all sizes. They'd love to hear from you.


Composable Infrastructure and Hyperconvergence…what’s the difference?

By Richard Arneson

You can't flip through a trade pub for more than twenty seconds without reading one of these two words, probably both: composable and hyperconvergence. Actually, there's a very good chance you'll see them together, since both provide many of the same benefits to enterprise data centers. But with similarity comes confusion, leaving some to wonder when, or why, one should be used instead of the other. To add fuel to the flames of confusion, hyperconvergence and composable infrastructure can be, and often are, used together; they even complement each other quite well. But if nothing else, keep one primary thought in mind: composable is the evolutionary next step from hyperconvergence.

In the beginning…

Hyperconvergence revolutionized data centers by giving them a cloud-like experience on an on-premises infrastructure. Since its inception approximately six years ago (its precise age is up for debate), the hyperconvergence market has grown to just north of $3.5B. Hyperconvergence reduces a rack of servers to a small, 2U appliance, combining compute, software-defined storage and virtualization. Storage is handled in software that manages storage nodes, which can be physical or virtual servers. Each node runs virtualization software identical to the other nodes', allowing the combined nodes to form a single, virtualized storage pool. It's all software-managed, which is especially handy in the event of an equipment, or node, failure.

However, for all its benefits, hyperconvergence has one primary drawback: storage and compute must be scaled together, even if only one of them needs scaling at that moment. If you need to add storage, you also have to add more compute and RAM. With composable infrastructure, you can add the needed resources independently of one another. In short, hyperconvergence doesn't address as many workloads as composable infrastructure.

…then there was composable

Who coined the term composable infrastructure is up for debate, but HPE was definitely the first to deliver it to the marketplace with the introduction of HPE Synergy in 2016. Today many vendors besides HPE offer composable solutions, most notably Cisco with UCS and Dell EMC with VxBlock. And each of these solutions satisfies the three basic goals of composable infrastructure:

  • Software-defined intelligence
    • Creates compute, storage and network connectivity from pooled resources to deploy VMs, on-demand servers and containers.
  • Access to a fluid pool of resources
    • Resources can be deployed to support needs as they arise; the pools are like additional military troops, deployed where and when they're needed.
  • Management through a single, unified API
    • A unified API makes deploying infrastructure and applications faster and far easier; code can be written once to address compute, storage and network. Provisioning is streamlined and designed with software intelligence in mind.

Talk to the experts

For more information about hyperconverged or composable infrastructures, contact the talented, tenured solutions architects and engineers at GDT. They're experienced at designing and implementing hyperconverged and composable solutions for enterprises of all sizes. They'd love to hear from you.


Intent-Based Networking (IBN) is all the buzz

You may or may not have heard of it, but if you fall into the latter camp, it won't be long until you do, probably a lot. Network management has always been associated with several words, none of them very appealing to IT professionals: manual, time-consuming and tedious. An evolution is taking place to take those three elements out of network management: Intent-Based Networking, or IBN.

It’s software

Some suggest that intent-based networking isn't a product but a concept or philosophy. Opinions aside, its name is confusing because "intent-based networking" omits an integral word: software.

Intent-based networking removes manual, error-prone network management and replaces it with automated processes guided by network intelligence, machine learning and integrated security. Studies of network management estimate that anywhere from 75% to 83% of network changes are currently made via CLIs (command line interfaces). Manual, CLI-driven network changes are prone to mistakes, the number of which depends on the user making them. And the resulting network downtime means headaches, angry users and, worst of all, lost revenue. If revenue generation depends directly on the network being up, millions of dollars can be lost even if the network is down for only a short period.

How does IBN work?

In the case of intent-based networking, the word intent simply means what the network “intends” to accomplish. It enables users to configure how, exactly, they intend the network to behave by applying policies that, through the use of automation and machine learning, can be pushed out to the entire infrastructure.

Wait a minute, IBN sounds like SDN

If you’re thinking this, you’re not the only one. They sound very similar, what with the ease of network management, central policy setting, use of automation, cost savings and agility. And to take that a step further, IBN can use SDN controllers and even augment SDN deployments. The main difference, however, lies in the fact that IBN is concerned more with building and operating networks that satisfy intent, rather than SDN’s focus on virtualization (creating a single, virtual network by combining hardware and software resources and functionality).

IBN―Interested in What is needed?

IBN first understands what the network is intended to accomplish, then calculates exactly how to do it. With apologies to SDN, IBN is simply smarter and more sophisticated. If it sounds like IBN is the next evolution of SDN, you’re right. While the degree or level of evolution might be widely argued, it would take Clarence Darrow to make a good case against evolution altogether. (Yes, I’m aware of the irony in this statement.)

Artificial Intelligence (AI) and Machine Learning

Through advancements in AI and algorithms used in machine learning, IBN enables network administrators to define a desired state of the network (intent), then rely on the software to implement infrastructure changes, configurations and security policies that will satisfy that intent.

Elements of IBN

According to Gartner, four elements define intent-based networking. And if they seem a lot like SDN, you're right again; basically, only the first element really distinguishes IBN from SDN.

  1. Translation and Validation – The end user inputs what is needed, the network configures how it will be accomplished, and validates whether the design and related configurations will work.
  2. Automated Implementation – Through network automation and/or orchestration, the appropriate configuration can be applied across the entire infrastructure.
  3. Awareness of Network State – The network is monitored in real-time, and is both protocol- and vendor-agnostic.
  4. Assurance and Dynamic Optimization/Remediation – The network is continuously validated in real-time, and corrective action can be administered, such as blocking traffic, modifying network capacity, or notifying network administrators that the intent isn't being met.
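The translate-then-assure loop those elements describe can be sketched in a few lines. Everything here is illustrative: the intent fields, the generated "config," and the metric names are invented, not any vendor's actual syntax.

```python
# Sketch of the IBN loop: translate an intent into configuration,
# then continuously validate observed network state against it.
def translate(intent):
    # Element 1 (Translation): turn "what" into "how".
    return {"acl": f"permit {intent['app']} latency<={intent['max_latency_ms']}"}

def config_limit(config):
    # Recover the latency bound encoded in the generated config.
    return int(config["acl"].rsplit("<=", 1)[1])

def validate(config, network_state):
    # Element 4 (Assurance): does the live network still meet intent?
    return network_state["latency_ms"] <= config_limit(config)

intent = {"app": "voice", "max_latency_ms": 50}
cfg = translate(intent)
print(validate(cfg, {"latency_ms": 35}))  # True: intent is met
print(validate(cfg, {"latency_ms": 80}))  # False: trigger remediation
```

The `False` branch is where an IBN system would act on its own: re-routing traffic, adding capacity, or alerting administrators that the intent is no longer satisfied.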

IBN―Sure, it’s esoteric, but definitely not just a lot of hype

If you have questions about intent-based networking and what it can do for your organization, contact one of the networking professionals at GDT for more information. They've helped companies of all sizes, and from all industries, realize their digital transformation goals. They'd love to hear from you.

Open and Software-Driven―it’s in Cisco’s DNA

Cisco's Digital Network Architecture (DNA), announced to the marketplace approximately two years ago, brings together all the elements of an organization's digital transformation strategy: virtualization, analytics, automation, cloud and programmability. It's an open, software-driven architecture that complements Cisco's data center-based Application-Centric Infrastructure (ACI) by extending the same policy-driven, software development approach throughout the entire network, including campuses and branches, wired or wireless. It's delivered through the Cisco ONE™ Software family, which enables simplified software-based licensing and helps protect software investments.

What does all of that really mean?

With Cisco DNA, each network device is considered part of a unified fabric, which gives IT departments a simpler and more cost-effective means of truly taking control of their network infrastructure. Now IT departments can react at machine speed to quickly changing business needs, including security threats, across the entire network. Prior to Cisco DNA, reaction times relied on human-powered workflows, which ultimately meant making changes one device at a time. Now they can interact with the entire network through a single fabric and, in the case of a cyber threat, address it in real-time.

With Cisco DNA, companies can address the entire network as one, single programmable platform. Ultimately, employees and customers will enjoy a highly enhanced user experience.

The latest buzz―Intent-based Networking

Cisco DNA is one of the company’s answers to the industry’s latest buzz phrase―intent-based networking. In short, intent-based networking takes the network management of yore (manual, time-consuming and tedious) and automates those processes. It accomplishes this by applying deep intelligence and integrated security to deliver network-wide assurance.

Cisco DNA’s “five (5) Guiding Principles”:

  1. Virtualize everything. With Cisco DNA, companies can enjoy the freedom of choice to run any service, anywhere, independent of underlying platforms, be they virtual, physical, on-prem or in the cloud.
  2. Automate for easy deployment, maintenance and management―a real game-changer.
  3. Provide Cloud-delivered Service Management that combines the agility of the cloud with security and the control of on-prem solutions.
  4. Make it open, extensible and programmable at every layer, with open APIs (Application Programming Interfaces) and a developer platform to support an extensive ecosystem of network-enabled applications.
  5. Deliver extensive Analytics, which provide thorough insights on the network, the IT infrastructure and the business.

Nimble, simple and network-wide―that’s GDT and Cisco DNA

If you haven’t heard of either intent-based networking or Cisco’s DNA, contact one of the networking professionals at GDT for more information. They’ve helped companies of all sizes, and from all industries, realize their digital transformation goals. They’d love to hear from you.

SD-WAN: Demystifying Overlay, Underlay, Encapsulation & Network Virtualization

More details on the subject follow, but let’s get this out of the way first: SD-WAN is a virtual, or overlay, network; the physical, or underlay, network is the one on which the overlay network resides. Virtual overlay networks contain nodes and links (virtual ones, of course) and allow new services to be enabled without re-configuring the entire network. They are secure and encrypted, and are independent of the underlay network, whether it’s MPLS, ATM, Wi-Fi, 4G, LTE, et al. SD-WAN is transport agnostic―no offense, but it simply doesn’t care about the means of transport you’ve selected.
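The overlay/underlay split can be pictured with a toy data structure: the virtual links stay the same no matter which physical transport carries them. The site names and transports here are invented for illustration.

```python
# Illustrative sketch of the overlay/underlay split: the overlay's virtual
# links are fixed, while the underlay transport per site pair can be anything.

overlay_links = [("dallas", "austin"), ("dallas", "tulsa")]

# The same overlay rides different underlay transports per site pair.
underlay = {("dallas", "austin"): "MPLS", ("dallas", "tulsa"): "LTE"}

def describe(link):
    a, b = link
    return f"{a}<->{b} (virtual) over {underlay[link]} (physical)"

for link in overlay_links:
    print(describe(link))
```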

While the oft-mentioned benefits of SD-WAN include cost savings, ease of management and the ability to prioritize traffic, it also provides many other, less-mentioned benefits, including:

  • The ability for developers to create and implement applications and protocols more easily in the cloud,
  • More flexibility for data routing through multi-path forwarding, and
  • The easy shifting of virtual machines (VMs) to different locations, but without the constraints of the physical, underlay network.

Overlay networks have been around for a while; in fact, the Internet is an overlay network that originally ran across the underlay Public Switched Telephone Network (PSTN). And in 2018, most overlay networks, such as VoIP and VPNs, run atop the Internet.


According to Merriam-Webster, the word encapsulation means “to enclose in or as if in a capsule.” And that’s exactly what occurs in SD-WAN, except the enclosure isn’t a capsule, but a packet. The encapsulation occurs within the physical network, and once the primary packet reaches its destination, it’s opened to reveal the inner, or encapsulated, overlay network packet. If the receiver of the delivered information isn’t authenticated, they won’t be able to access it.
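The packet-in-packet idea can be sketched in a few lines. The field names below are illustrative and don’t correspond to any real tunnel header format.

```python
# Minimal sketch of packet-in-packet encapsulation: the overlay packet rides
# as the payload of an underlay packet and is only revealed at the
# destination, and only to an authenticated receiver.

def encapsulate(overlay_packet, underlay_src, underlay_dst):
    return {"src": underlay_src, "dst": underlay_dst,
            "payload": overlay_packet}

def decapsulate(underlay_packet, authenticated):
    # An unauthenticated receiver never sees the inner packet.
    if not authenticated:
        raise PermissionError("receiver not authenticated")
    return underlay_packet["payload"]

inner = {"src": "10.0.0.5", "dst": "10.0.9.2", "data": "hello"}
outer = encapsulate(inner, "203.0.113.1", "198.51.100.7")
assert decapsulate(outer, authenticated=True) == inner
```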

Network Virtualization

SD-WAN (like SDN) and Network Virtualization are often used interchangeably, but the former is really a subset of the latter. Both, through the use of software, connect virtual machines (VMs) that mimic physical hardware. And both allow IT managers to consolidate multiple physical networks, divide them into segments, and ultimately enjoy easier network management, automation and improved speed.

Don’t leave your network to chance

WANs and LANs are the lifeblood of IT departments. If you’re considering SD-WAN and would like to enjoy the benefits it can, if deployed optimally, deliver, calling on experienced SD-WAN solutions architects and engineers should be your first order of business. Even though SD-WAN is widely touted as a simple, plug-n-play networking solution, there are many things to consider in addition to those wonderful benefits you’ve been hearing about for years. For instance, the use of multiple software layers can require more overhead, and the process of encapsulation can place additional demands on computing. Yes, there’s a lot to consider.

SD-WAN experts like those at GDT can help lead you through this critically important element of your digital transformation journey. They’ve done just that for enterprises of all sizes, and from a wide range of industries. Their experienced SD-WAN solutions architects and engineers would love to hear from you.

Dispelling myths about SD-WAN

Many of the misrepresentations of truth (OK, myths) that get bandied about regarding SD-WAN come from MPLS providers or network engineers who are happy with their current architecture and/or dread the thought of change. There’s no question, MPLS has been a great transport technology over the past fifteen (15) years or so, and its removal of Data Link Layer (OSI’s Layer 2) dependency to provide QoS (Quality of Service) across the WAN was a considerable step up from legacy solutions, such as frame relay and ATM. And it’s still a great, and widely used, transport protocol, and can be effectively utilized with SD-WAN. So, let’s start with this first myth…

SD-WAN is a replacement for MPLS

No question, SD-WAN is perfect for replacing MPLS in certain instances, especially as it pertains to branch offices. MPLS isn’t cheap, and provisioning it at each location requires a level of on-site expertise. Now consider the associated costs and hassles when a company has hundreds of locations. However, given the stringent QoS demands of many organizations, MPLS is still used to satisfy them, but it can perfectly augment SD-WAN, as well. MPLS provides very high, and reliable, packet delivery, and many companies use it solely for traffic requiring QoS, pushing everything else across the SD-WAN.

SD-WAN and WAN Optimization are the same thing

WAN Optimization was designed to address traffic traversing legacy networks, like frame relay and ATM. It was a way to squeeze the most out of an existing network without having to expensively upgrade bandwidth at each site. Basically, the need for bandwidth outgrew the budget for more of it, and WAN Optimization, through caching and protocol optimization, allowed users to download cached information from a file that had already been downloaded―faster, more efficient use of bandwidth. But WAN Optimization can work in conjunction with SD-WAN, as it reduces latency across (very) long-distance WAN locations, satisfies certain QoS needs through data compression, and addresses TCP/IP protocol limitations.

SD-WAN is nothing more than a cost savings play

No question, SD-WAN is less costly than MPLS, and utilizes inexpensive, highly commoditized Internet connections. But there is a long list of reasons to utilize SD-WAN that go above and beyond savings. It’s far easier to deploy than MPLS and can be centrally managed, which is ideal for setting policies, then pushing them out to all SD-WAN locations. SD-WAN works with the transport protocol of your choosing, whether that’s MPLS, 4G, Wi-Fi or something else. And there’s no longer a requirement to lease lines from only one (1) service provider, so customers can enjoy far greater flexibility and the ability to monitor circuits regardless of the provider used.

SD-WAN requires a hybrid solution

Hybrid WANs, which utilize two (2) or more transport technologies across the WAN, are certainly not an SD-WAN requirement, but definitely work beautifully within that architecture. For instance, it’s not uncommon for organizations to utilize legacy networks for time-sensitive traffic, and SD-WAN for offloading certain applications to their corporate data center. A hybrid solution can allow for the seamless flow of traffic between locations so that, in the event one link experiences loss or latency, the other can instantly take over and meet associated SLAs.

Here’s one that’s NOT a myth: if you’d like to implement SD-WAN, you should turn to professionals who specialize in it

To enjoy all that SD-WAN offers, there are a host of things to consider, from architectures and applications, to bandwidth requirements and traffic prioritization. SD-WAN is often referred to as a simple plug-n-play solution, but there’s more to it than meets the eye. Yes, it can be a brilliant WAN option, but not relying on experts in SD-WAN technology may soon leave you thinking, All that SD-WAN hype is just that…hype!

Working with SD-WAN experts like those at GDT can help bring the technology’s many benefits to your organization and leave you thinking, “It’s no hype…SD-WAN is awesome.” They’ve done just that for many enterprises―large, medium and small. Their experienced SD-WAN solutions architects and engineers would love to hear from you.

Flexible deployment to match unique architectural needs

In late 2017, tech giant VMware purchased VeloCloud, further strengthening its market-leading position in transitioning enterprises to a more software-defined future. The acquisition built on the success of its leading VMware NSX virtualization platform, and expanded its portfolio to address branch transformation, security, end-to-end automation and application continuity from the data center to the cloud edge.

Referred to as NSX SD-WAN, VeloCloud’s solution allows for flexible deployment and secure connectivity that easily scales to meet the demands of enterprises of all sizes―and they know about “all sizes.” VMware provides compute, mobility, cloud networking and security offerings to over 500,000 customers throughout the world.

NSX SD-WAN satisfies the following key WAN needs:


Simplified Deployment

From a central location, through a single pane of glass, enterprises of all sizes can build out branches in―literally―a matter of minutes, and set policies that are automatically pushed out to branch SD-WAN routers. Save the costs of sending a CCIE out to a branch office in Timbuktu or Bugtussle, and use the savings on other initiatives.


Security

With cloud applications, BYOD, and the need to utilize the cellular or broadband transport of users’ choosing, security is, as well it should be, of the utmost importance. The robust NSX SD-WAN architecture secures data and traffic through a secure overlay, regardless of the type of transport or the service provider. Best of all, it returns the ability to manage security, control and compliance to a central location.

Bandwidth Demands

With the growing―and growing―use of cloud applications, the need to utilize less expensive bandwidth is critically important. NSX SD-WAN can aggregate circuits to offer more bandwidth and deliver optimal cloud application performance.

Cloud Applications

If your employees aren’t currently spending an inordinate amount of time in the cloud, they will be. NSX SD-WAN provides direct access to the cloud, bypassing the need for MPLS networks to first backhaul traffic to a data center, then hand it off to the cloud. With that backhaul comes latency and a less-than-satisfying cloud experience.

NSX SD-WAN―Architecture friendly

When you’ve got over a half million customers around the world, it’s imperative to provide a solution that takes into account the many architectures that have been deployed. Regardless of the type of SD-WAN required―whether Internet-only or a Hybrid solution utilizing an existing MPLS network―NSX SD-WAN can satisfy the need.

GDT’s team of expert SD-WAN solutions architects and engineers has implemented SD-WANs for some of the largest enterprises and service providers in the world. For more information about what SD-WAN can provide for your organization, contact them. They’d love to hear from you.


How Companies are Benefiting from IT Staff Augmentation

By Richard Arneson

Companies have been augmenting their IT departments for years with professionals who can step in and make an immediate impact by utilizing their skill sets and empirical expertise. And it’s not limited to engineers or solutions architects. Project managers, high-level consultants, security analysts, DevOps professionals, cabling experts…the list is only limited by what falls within the purview of IT departments. It’s the perfect solution when a project or initiative has a finite timeline and requires a very particular level of expertise. And it can address a host of other benefits, as well, by providing:

Greater Flexibility

Change and evolving business needs go hand-in-hand with information technology. Now more than ever, IT departments are tasked with creating more agile, cutting-edge business solutions, and their ability to quickly adapt can easily be a make-or-break proposition for companies. You might not have the time or money to quickly find the individuals who can help expedite your company’s competitive advantage(s) in the marketplace.

Cost Effectiveness

Bringing an IT professional onboard full-time to focus on a particular project can be cost prohibitive if you’re left wondering how they can be utilized once the project is completed. And, of course, there are the costs of benefits to consider, as well. According to the U.S. Department of Labor, benefits are worth about 30% of compensation packages.
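As a back-of-the-envelope illustration of that figure (reading it as benefits making up roughly 30% of the total package), the fully loaded cost works out like this; the salary number is purely illustrative.

```python
# Rough math for the ~30% benefits figure cited above: if benefits are about
# 30% of the total compensation package, salary is the other ~70%.

BENEFITS_SHARE = 0.30   # benefits as a share of the total package (illustrative)

def total_compensation(base_salary):
    """Estimate the full package implied by a given base salary."""
    return base_salary / (1 - BENEFITS_SHARE)

# An $84,000 salary implies roughly a $120,000 total package.
print(round(total_compensation(84_000)))
```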

Reduced Risk and More Control

Augmenting IT staff, rather than outsourcing an entire project, can not only help ensure the right skill sets are being utilized, but risk can be mitigated by maintaining oversight and control in-house.

Quicker, Easier Access to the Right IT Pros

Thankfully, unemployment is lower than it’s been in years, and in the IT industry it’s less than half the national average. So quickly finding the right person with the perfect skill set can seem harder than finding a needle in a haystack. Companies’ recruiting efforts don’t focus exclusively on IT; they’re filling jobs in finance, marketing, HR, manufacturing, et al. Turning to IT staff augmentation experts who maintain large networks of professionals can uncover the right personnel quickly.

An answer to Attrition

Remember that low jobless rate in the IT sector? Sure, it’s great news, but it also means there’s a lot of competition for the right resources. There will be attrition―it’s a given. And utilizing staff augmentation can help combat that by placing individuals on specific projects and initiatives for a designated period of time.

Call on the Experts

If you have questions about augmenting your IT staff with the best and brightest the industry has to offer, contact the GDT Staffing Services professionals. Some of the largest, most noteworthy companies in the world have turned to GDT so key initiatives can be matched with IT professionals who can help drive those projects to completion. They possess years of IT experience and expertise, and maintain a vast network of IT professionals who hold the highest levels of certification in the industry. They’d love to hear from you.

The Plane Truth about SD-WAN

You can’t get more than a few words into any article, blog or brochure about SD-WAN without reading how the control and data planes are separated. For many, this might fall under the As long as it works, I don’t really care about it heading. And that’s evident based on a lot of the writing on the subject―it’s mentioned, but that’s about as far as the explanation goes. But the uncoupling of the control and data planes in SD-WAN is a fairly straightforward, easy-to-understand concept.

Control Plane comes first…

Often regarded as the brains of the network, the control plane is what controls the forwarding of information within the network. It controls routing protocols, load balancing, firewall configurations, et al., and determines the route data will take across the network.

…then Data Plane

The data plane forwards the traffic based on information it receives from the control plane. Think UPS. The control plane is dispatch telling the truck(s) where to go and exactly how to get there; the truck delivering the item(s) is the data plane.
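The UPS analogy translates almost directly into code: a control-plane object computes the route, and a data-plane object just forwards along whatever path it’s handed. The topology and names are invented for the example.

```python
# Toy rendition of the UPS analogy: the control plane (dispatch) decides the
# route once; the data plane (the truck) simply forwards along it.

class ControlPlane:
    """Dispatch: decides the path through the network."""
    def __init__(self, routes):
        self.routes = routes                 # destination -> list of hops

    def route_for(self, destination):
        return self.routes[destination]

class DataPlane:
    """The truck: forwards traffic along whatever path dispatch provides."""
    def forward(self, packet, path):
        return [f"{packet} via {hop}" for hop in path]

control = ControlPlane({"branch-42": ["core-1", "edge-7"]})
data = DataPlane()
print(data.forward("invoice.pdf", control.route_for("branch-42")))
```

The point of the split is that `ControlPlane` can live in centrally managed software while many `DataPlane` devices stay simple and fast.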

So why is separating the control plane and data plane in SD-WAN a good thing?

In traditional WAN hardware, such as routers and switches, both the control plane and data plane are embedded in the equipment’s firmware. Setting up, or making changes to, a new location requires that the hardware be accessed and manually configured (see Cumbersome, Slow, Complicated). With SD-WAN, the decoupled control plane is embedded in software, so network management is far simpler and can be overseen and handled from a central location.

Here are a few more benefits that SD-WAN users are enjoying as a result of the separation of the Control and Data Planes:

  • Easier deployment; SD-WAN routers, once connected, are automatically authenticated and receive configuration information.
  • Real-time optimal traffic path detection and routing.
  • Traffic that’s sent directly to a cloud services provider, such as AWS or Azure, and not backhauled to a data center first, only then to be handed off to the Internet.
  • A significant reduction in bandwidth costs when compared to MPLS.
  • Network policies that no longer have to be set for each piece of equipment, but can be created once and pushed out to the entire network.
  • Greatly reduced provisioning time; a secondary Internet circuit is all that’s needed, so weeks spent awaiting the delivery of a new WAN circuit from a service provider is a thing of the past.
  • A reduction of costs, headaches and hassles thanks to SD-WAN’s agnostic approach to access type and/or service provider.
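The policy benefit in the list above (create once, push everywhere) can be sketched like this; the sites and policy fields are illustrative.

```python
# Sketch of "create a policy once, push it everywhere": with the control
# plane in software, one definition reaches every site instead of a
# box-by-box login.

policy = {"voice_priority": "high", "guest_bandwidth_mbps": 10}

sites = {"dallas": {}, "austin": {}, "tulsa": {}}

def push_policy(sites, policy):
    """Apply one centrally defined policy to every SD-WAN site."""
    for config in sites.values():
        config.update(policy)
    return len(sites)

updated = push_policy(sites, policy)
print(f"policy pushed to {updated} sites")   # one definition, zero truck rolls
```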

Call on the SD-WAN experts

Enterprises and service providers are turning to SD-WAN for these, and many other, reasons, but there are a lot of architectures (overlay, in-net, hybrid) and SD-WAN providers from which to choose. And, like anything else regarding the health and well-being of your network, due diligence is of the utmost importance. That’s why enlisting the help and support of SD-WAN solutions architects and engineers will help ensure that you’ll be able to enjoy the most that SD-WAN can offer.

To find out more about SD-WAN and the many benefits it can provide your organization, contact GDT’s tenured SD-WAN engineers and solutions architects. They’ve implemented SD-WAN solutions for some of the largest enterprise networks and service providers in the world. They’d love to hear from you.

Cisco’s Power of v

In April of 2017, Cisco put both feet into the SD-WAN waters with its purchase of San Jose, California-based Viptela, a privately held SD-WAN company. One of the biggest reasons for the acquisition was the ability to easily integrate Viptela software into Cisco’s platforms. Prior to the acquisition, Cisco’s SD-WAN solution utilized its own IWAN software, which delivered a somewhat complex, unwieldy option. The merger of IWAN and Viptela formed what is now called, not surprisingly, Cisco SD-WAN.

Questions concerning the agility and effectiveness of Cisco SD-WAN can best be answered by the following quote from Cisco customer Agilent Technologies, a manufacturer of laboratory instruments:

“Agilent’s global rollout of Cisco SD-WAN enables our IT teams to respond rapidly to changing business requirements. We now achieve more than 80% improvement in turnaround times for new capability and a significant increase in application reliability and user experience.”

The following four (4) “v” components are what comprise Cisco’s innovative SD-WAN solution.

Controller (vSmart)

What separates SD-WAN from those WAN technologies of the past is its decoupling of the Data Plane, which carries the traffic, from the Control Plane, which directs it. With decoupling, the controls are no longer maintained in equipment’s firmware, but in software that can be centrally managed. Cisco’s SD-WAN controller is called vSmart, which is cloud-based and uses Overlay Management Protocol (OMP) to manage control and data policies.

vEdge routers

Cisco’s SD-WAN routers are called vEdge, and they receive data and control policies from the vSmart controller. They can establish secure IPSec tunnels with other vEdge routers, and can be either on-prem or installed in private or public clouds. They can run traditional routing protocols, such as OSPF or BGP, to satisfy LAN needs on one side, WAN on the other.

vBond―the glue that holds it together

vBond is what connects and creates those secure IPSec tunnels between vEdge routers, after which key intel, such as IP addressing, is communicated to vSmart and vManage.


vManage―the centralized dashboard

Managing WAN traffic from a centralized location is what makes SD-WAN, well…SD-WAN. vManage provides that dashboard through a fully manageable, graphical interface from which policies and communications rules can be monitored and managed for the entire network. Different topologies can be designed and implemented through vManage, whether hub and spoke, spoke to spoke, or a design that accommodates different access types.
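The difference between those topologies is easy to see in a quick sketch: the same four sites yield different tunnel counts under hub-and-spoke versus a full (spoke-to-spoke) mesh. The site names are invented for illustration.

```python
# Illustrative generation of two topologies from the same site list:
# hub-and-spoke vs. full mesh (every site pair connected).

from itertools import combinations

sites = ["hq", "dallas", "austin", "tulsa"]

def hub_and_spoke(hub, spokes):
    """One tunnel from the hub to each spoke."""
    return [(hub, s) for s in spokes if s != hub]

def full_mesh(sites):
    """A tunnel between every pair of sites."""
    return list(combinations(sites, 2))

print(len(hub_and_spoke("hq", sites)))  # 3 tunnels
print(len(full_mesh(sites)))            # 6 tunnels
```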

To enjoy the Power of v, contact the experts at GDT

GDT has been a preferred Cisco partner for over 20 years, and its expert SD-WAN solutions architects and engineers have implemented SD-WANs for some of the largest enterprises and service providers in the world. Contact them; they’d love to hear from you.

SDN and SD-WAN: A Father & Son Story

SD-WAN (software-defined WAN) has been all the rage for a few years now, coming to the rescue of enterprises that had spent considerable chunks of their IT budgets on MPLS to connect offices scattered throughout the world. But it’s not to be confused with SDN (software-defined networking), which, even though the two share “software-defined” in their titles, is different. Think of SDN as the parent technology, and SD-WAN as its up-and-coming son. Yes, they’re similar, but different.

The root of their common name

SDN and SD-WAN share nomenclature due to the separation of their Control and Data Planes, which, along with providing many other benefits, makes them easier to deploy and manage. With both SDN and SD-WAN, the Control Plane, which directs traffic, isn’t in the equipment’s firmware, but in software, which allows for ease of management from a central location. Without that separation, equipment must be accessed and manually configured at each location. And to do that, a level of technical expertise is needed, so the thought of having an office manager try to configure a router is, well… Let’s just say it’s not going to happen. Flights and hotel stays ensue, so the travel costs alone for implementing an MPLS network with dozens of branch locations are exorbitant. Now add in the high costs of MPLS circuits and the long wait times for provisioning, and you’re looking at an expensive, time-intensive wide area network.

Different career paths

As is the case with many fathers and sons, SDN and SD-WAN have chosen different career paths. Each has its own specialty: SDN for local area networks, data centers and service providers’ core networks, and SD-WAN to augment, or replace, MPLS-based wide area networks. Through Network Function Virtualization (NFV), SDN can be configured and programmed by the customer through software that was once held in closed, proprietary systems. SDN allows organizations to quickly and easily (and without disruption) adapt to ever-changing compute, storage and networking needs.


There’s no question, the “cost savings” label is bestowed upon SD-WAN more than SDN. As mentioned earlier, the savings to connect branch offices with SD-WAN are considerable when compared to MPLS. While a secondary Internet connection is needed, the low-cost, commoditized price of broadband is significantly less than that of MPLS circuits. And SD-WAN provides a lot more than cost savings. SD-WAN routers can bring locations online in a matter of minutes, as authentication and configuration are automated. SD-WAN deftly steers traffic around network bottlenecks, and traffic can be prioritized so latency-sensitive, high-bandwidth applications traverse accommodating network paths. And SD-WAN is carrier and transport agnostic, so different service providers can be selected by location, and traffic can be carried by the transport protocol of choice, whether 4G, Wi-Fi or even MPLS.

Call on the experts

While the benefits of, and reasons for, a move to SDN or SD-WAN are compelling, there are several issues and elements to consider prior to implementing either. That’s why it’s best to consult with software-defined solutions architects and engineers like those at GDT. They’re experienced at deploying cutting-edge, innovative solutions for some of the largest enterprise and service provider networks in the world. Contact them; they’d love to hear from you.

What is Digital Transformation?

We’ve all heard of it; we know our company should be striving to achieve it; but what exactly is…digital transformation?

Many people, at least those outside of the IT and telecommunications industries, may have been first introduced to the digital world through clocks or CDs, leaving them with the question, “Haven’t we been digitally transformed for years?” Well, yes, in a sense, but when digital is used with transformation, it means something altogether different. In the simplest of definitions, digital transformation refers to how companies utilize technology to change:

  • the way their business operates,
  • how they engage their customers, and
  • how they become more competitive, and profitable, as a result.

This transformation accelerates positive change across all departments and provides, if done correctly, agility, efficiencies, innovation, and key analytics to help companies make more educated business decisions.

Becoming more competitive

Whether or not a company has a digital transformation strategy, they can be certain of one thing―their competitors do. Creating and implementing one is not easy, especially for companies who’ve enjoyed long term success. Here’s why: it requires them to do what will probably be very uncomfortable, even unconscionable―re-think processes and procedures that may have been in place and successful for decades, and even be prepared to scrap them, if necessary.

Digital transformation is somewhat like human factors engineering (AKA ergonomics), which forces companies to better understand, even feel, that end user experience. Companies need to, as author Stephen Covey wrote in his book The 7 Habits of Highly Effective People, begin with the end in mind. They need to imagine how they’d like to engage customers, keep them engaged, and monetize that user experience. From there, they can begin to reverse engineer what it will take to get there (yes, that’s where it gets really challenging).

The move toward edge devices

Edge devices, of course, refer to the point at which a network is accessed. Ask a 60-year-old network engineer what he considers to be an edge device, and he’ll probably list routers, switches, multiplexers, et al.—all of the equipment that provides access to LANs (Token Ring, Ethernet) and WANs, which support a wide array of technologies, such as frame relay, ISDN, ATM and MPLS. Lower that age group and respondents will probably think IoT, then list off smartphones, tablets and sensors, such as doorbells, thermostats and security systems—basically, anything that runs iOS, Android or Linux, and has an IP address.

So how are these edge devices an integral component of digital transformation? Well, they represent the sundry ways customers can enjoy an enhanced end user experience. And while customers are enjoying that better experience, the company, in turn, is accessing vital information and key analytics to help them make more impactful and better-educated business decisions. The result? Enhanced, targeted marketing, happier and more well-informed customers, operational efficiencies enjoyed by multiple departments and business units, and, of course, higher revenue.

Digital Transformation

The next time somebody asks you about digital transformation or what it means, you’ll know what to say in under twenty-five (25) words: “Digital transformation is the utilization of technology to enhance the end user experience, transform business processes and greatly advance value propositions.”

For more information about how your organization can develop or enhance its digital transformation journey, call on the expert solutions architects and engineers at GDT. For years they’ve been helping customers of all sizes, and from all industries, realize their digital transformation goals. Contact them; they’d love to hear from you.

Enjoy the Savings (including those of the soft variety) with SD-WAN

Sure, there are many, many benefits of utilizing SD-WAN that go well beyond cost savings, but the dollar signs tend to get the most press (big surprise). But savings aren’t limited to costs reflected solely within IT budget line items―they stretch far and wide, and include, as a byproduct, many soft cost savings that organizations of all sizes, and from all industries, are currently enjoying with SD-WAN.

Hard Cost Savings


Hard cost savings are certainly the easiest to calculate; they’re the ones reflected in the lower bills you’ll receive from your MPLS provider, like AT&T, CenturyLink, Charter Spectrum, et al. Connecting branch offices with MPLS isn’t cheap, and provisioning them can also be expensive in terms of time. New circuits or upgrades can easily take weeks to accomplish, and who has time for that? Sure, MPLS offers excellent QoS (quality of service) and is a very stable, reliable technology, but SD-WAN has come a long, long way in addressing requirements like QoS. And if offices, especially those of the smaller, remote variety, aren’t running real-time applications and are accessing them via the Cloud, SD-WAN is ideal.

For SD-WAN, another Internet circuit is needed to run as a companion to your existing one. And if you haven’t noticed, the cost of dedicated, high-bandwidth Internet circuits is crazy inexpensive, especially when compared to an MPLS circuit that delivers comparable bandwidth. Neither Internet connection stands by idle; both are hard at work satisfying your networking needs. SD-WAN automatically looks for, and steers your traffic around, bottlenecks in the network that could cause jitter, latency and, of course, dropped packets.
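That traffic steering can be sketched as a simple scoring function over measured path health; the metrics, weights and path names below are invented for illustration.

```python
# Sketch of real-time path steering: score each available path on measured
# latency and jitter and send traffic down the healthiest one.

paths = {
    "broadband-1": {"latency_ms": 35, "jitter_ms": 2},
    "broadband-2": {"latency_ms": 80, "jitter_ms": 12},  # congested right now
}

def best_path(paths):
    """Pick the path with the lowest combined latency/jitter score."""
    return min(paths, key=lambda p: paths[p]["latency_ms"]
               + 5 * paths[p]["jitter_ms"])

print(best_path(paths))   # broadband-1
```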


Having a dedicated router at a branch office might make more sense from a cost standpoint if it’s supporting dozens, even hundreds, of employees, but it becomes more and more cost-prohibitive as those numbers go down. Moving to a SaaS (software-as-a-service) model means getting away from upfront capital expenditures and moving to a more budget-friendly, pay-as-you-go cloud model. That’s not to say that new hardware doesn’t need to be deployed for SD-WAN, but SD-WAN routers are highly flexible, simpler (they include traditional routing and firewall capabilities) and less expensive than a traditional router. Oh, and they’re much smaller―for instance, Viptela’s SD-WAN vEdge routers are all 1RU (less than 2” tall). Also, they’re compatible with traditional routers, so there’s no need to yank them out and set them next to the dumpster just yet.

The Harder-to-Calculate Soft Cost Savings

Soft costs are often overlooked, primarily because they’re harder to calculate. But there’s no question that SD-WAN, if implemented correctly, can result in a lot of soft cost savings (like those listed below) that should definitely be calculated and taken into consideration.


Consider the productivity your organization can lose waiting for an MPLS circuit to be delivered or upgraded. There’s also the very real possibility of network downtime during the provisioning of an MPLS circuit. And troubleshooting those circuits, whether they’re new or experiencing issues, takes time―often lots of it.

Travel Costs

With SD-WAN, the days are gone when a member of your IT staff has to travel to a branch location to install and configure a router. SD-WAN allows new sites to be turned up quickly and easily, within a matter of minutes.


With a secondary Internet circuit installed, SD-WAN can easily and automatically re-route traffic in the event one (1) of the circuits goes down. With MPLS, cloud-based applications are usually backhauled directly to the data center first, after which they’re handed off to the Internet. This can add latency and reduce performance. Not so with SD-WAN.


Carrier Neutrality

SD-WAN is carrier neutral and can utilize the transport of your choosing, whether 4G, MPLS, Wi-Fi, etc. And you don’t have to worry about securing circuits from only one (1) service provider, which provides far greater flexibility. Plus, SD-WAN provides the ability to monitor all circuits, regardless of service provider.

Got questions? GDT’s expert SD-WAN network architects have answers

The SD-WAN experts at GDT have implemented SD-WAN solutions for organizations of all sizes. They know how to implement a solution that not only provides savings, both hard and soft, but delivers the many other benefits SD-WAN can provide. Contact them today. They’d love to hear from you.

Who doesn’t want a turnkey, integrated backup solution?

That’s exactly what you’ll get with Dell EMC’s Integrated Data Protection Appliance

Two words: data protection. There probably isn’t a more important combination of words in the IT industry. Obviously, Dell EMC agrees―their latest IDPA (Integrated Data Protection Appliance) is a turnkey, pre-integrated appliance that brings together protection storage, search and analytics across a wide array of applications and platforms. And with Dell EMC’s new cloud data protection capabilities, critical information can be backed up from anywhere in the world, at any time.

Listening to the marketplace

Drawing primarily on empirical knowledge, Dell EMC designed the DP4400 to address what the marketplace needed most: simplicity. And that’s exactly what it delivers―the DP4400 is a single, stand-alone, 2U appliance that not only provides considerable turnkey (and easily upgradeable) storage, but is also very affordable when compared to competing products.

Cloud Ready

Cloud features are built into the DP4400, and no cloud gateways are needed. It not only provides data protection, but natively extends the same level of protection to the cloud. Cloud Disaster Recovery (DR) and Long-Term Retention (LTR) are built into the DP4400, and add-ons are not only easy to deploy, but scalable.

And speaking of LTR, Dell EMC guarantees 55:1 deduplication to a private, public or hybrid cloud, and the DP4400 allows for the management of up to 14 petabytes (PB) of capacity (yep, on a single DP4400). What does this really mean? It means that managing virtual or physical tape libraries is a thing of the past.
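The quoted 55:1 ratio is easy to sanity-check with back-of-the-envelope arithmetic. The figures below are purely illustrative, not Dell EMC sizing guidance:

```python
# Illustrative deduplication math (not Dell EMC sizing guidance):
# at a 55:1 ratio, physical capacity needed = logical data / 55.

def physical_tb_needed(logical_tb, dedup_ratio=55):
    """Physical TB required to store `logical_tb` of backup data."""
    return logical_tb / dedup_ratio

# 96 TB of usable appliance capacity at 55:1 covers ~5,280 TB (~5.3 PB) logical:
print(physical_tb_needed(5280))  # 96.0
```

In other words, every terabyte of physical disk can represent tens of terabytes of protected backup data, which is why tape libraries start to look superfluous.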

VMware (a Dell EMC company)

The DP4400 is optimized for VMware, which Dell picked up as part of its purchase of EMC in 2016. Automation is provided across the entire VMware data protection stack, including VM backup policies and automation.

And there’s more…

The DP4400 is:

  • Customer-installable/upgradable, and a 2U appliance (ah, that means it’s small―3 ½ inches tall),
  • “Grow-in-place” rich (24-96TB), and requires no additional hardware,
  • Capable of providing up to 2x shorter backups,
  • Requires up to 98% less bandwidth, and
  • Comes with a 3-year satisfaction guarantee and up to 55:1 data protection deduplication guarantee through Dell EMC’s Future-Proof Loyalty Program.

This only scratches the surface of what Dell EMC’s DP4400 IDPA can bring to your organization. For more information, contact the experts at GDT today.

Enjoy on-prem benefits with a public cloud experience

If you listen closely, you can practically hear IT professionals the world over asking themselves the same question―“If I utilize the public cloud, how can I maintain control and enjoy the security I get from on-premises infrastructures?” And if that question does indeed steer them away from cloud services, they’re left with the ongoing, uneasy feeling that comes from overprovisioning capacity and long-awaited circuit upgrades.

HPE has the answer to this IT conundrum

HPE GreenLake Flex Capacity is a hybrid cloud solution that provides customers with a public cloud experience and the peace of mind that can come with on-premises deployments. Like cloud services, HPE GreenLake Flex Capacity is a pay-as-you-go solution that offers capacity on-demand and quickly scales to meet growth needs, but without the (long) wait times associated with circuit provisioning.

And with HPE GreenLake Flex Capacity, network management is greatly simplified, as customers can manage all cloud resources in the environment of their choosing.

HPE GreenLake Flex Cap’s many benefits include…

  • Limitation of risk (and wracked nerves) by maintaining certain workloads on-prem
  • Better alignment of cash flows with your business due to no upfront costs and a pay-as-you-go model
  • No more wasteful circuit overprovisioning
  • Rapid scaling, which provides an ability to immediately address changing network needs
  • Receipt of real-time failure alerts and remediation recommendations that provide vital, up-to-date information
  • Ability to right-size capacity

And combined with HPE Pointnext…

HPE GreenLake Flex Cap delivers availability, reliability and optimization, and lets customers’ IT professionals concentrate on the initiatives and projects that will help shape their company’s future. And HPE’s services organization, Pointnext, can not only monitor and manage the entire solution, but also provides a customer portal that delivers key analytics, including detailed consumption metrics.

Questions? Call on the experts

If you have additional questions or need more information about HPE GreenLake Flex Capacity and the many benefits it can provide your IT organization, contact Pam Bull, GDT’s HPE point of contact. She’d love to hear from you.





How SD-WAN can enhance application performance

Remember the days when a new software application meant downloads, licenses, and minimum RAM and processing power requirements? Or when applications resided in a corporate data center and were accessed over expensive, leased lines from service providers, only then to be handed off to the Internet? Expensive, inefficient, and prone to latency―not a good networking triad. And direct Internet access can be fraught with issues, as well, leaving end users with unpredictable, inconsistent application performance and a spate of trouble tickets left in their wake.

Hello SD-WAN―a friend to the application. While content is king in the marketing world, applications enjoy a similar, regal role in the business world. It’s estimated that each worker uses between 5.5 and 8 different computer-based applications each day, and another 7 to 10 of the mobile variety. An inability to access any one of them can quickly derail your, and your company’s, day. Here are the many ways SD-WAN can enhance your organization’s mission-critical applications:

Sidestep the bottlenecks

SD-WAN is similar to traffic reports on drivetime radio, only better―much better. Imagine that your car hears the traffic report, then automatically steers you around the construction without you ever knowing a traffic snarl existed. SD-WAN works similarly: it continually searches for bottlenecks in the network (packet drops, jitter and latency), then selects the best, least congested route.
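The route-around-the-bottleneck idea can be sketched in a few lines of Python. The metrics, weights and path data below are illustrative assumptions, not any vendor’s actual algorithm:

```python
# Toy path selection: score each path on loss, jitter and latency,
# then steer traffic to the healthiest one. Weights are arbitrary.

def score_path(loss_pct, jitter_ms, latency_ms):
    # Lower score = healthier path; packet loss is penalized most heavily.
    return loss_pct * 10 + jitter_ms * 2 + latency_ms

def best_path(paths):
    return min(paths, key=lambda p: score_path(p["loss"], p["jitter"], p["latency"]))

paths = [
    {"name": "mpls",      "loss": 0.1, "jitter": 2, "latency": 20},
    {"name": "broadband", "loss": 0.0, "jitter": 5, "latency": 35},
]
print(best_path(paths)["name"])  # mpls (score 25 vs. 45)
```

A real SD-WAN controller measures these metrics continuously with active probes and re-evaluates the choice in near real time.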

Prioritize traffic by application

With SD-WAN, policies can be set up so certain applications traverse select network paths with less latency and greater bandwidth. Conversely, lower priority traffic, such as backups or Internet browsing, can be delivered via less expensive and/or less reliable connections.
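Conceptually, such a policy is just a table mapping applications to links. The application names, priority classes and link assignments below are assumptions for illustration only:

```python
# Hypothetical per-application policy table: high-priority apps ride the
# premium link, low-priority traffic takes the cheaper connection.

POLICY = {
    "voip":     {"priority": "high", "link": "mpls"},
    "erp":      {"priority": "high", "link": "mpls"},
    "backup":   {"priority": "low",  "link": "broadband"},
    "browsing": {"priority": "low",  "link": "broadband"},
}

def link_for(app):
    # Unrecognized applications default to best-effort broadband.
    return POLICY.get(app, {"link": "broadband"})["link"]

print(link_for("voip"), link_for("backup"))  # mpls broadband
```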

Fast access

With SD-WAN, new sites can be turned up in a matter of minutes, enabling users quick access to applications. When an SD-WAN edge appliance is plugged in, it automatically connects, authenticates and receives configuration information.

Centralized policy management

Priorities can be centrally managed for each application based on any number of policies, such as QoS, reliability, security and visibility. Also, this prioritization can be designated by users, dates, times or office locations.

SLA adherence

With SD-WAN, companies can set up policies per application, including respective SLA criteria (packet loss, jitter, latency), so particular applications are only directed over the connections that meet the SLA requirements. And if that connection goes down, the traffic can be re-routed to meet SLAs, even if it means being routed over a broadband or MPLS link.
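The SLA logic above can be sketched as a simple filter-then-fallback routine. The thresholds and link data are illustrative assumptions, not real SLA figures:

```python
# Per-application SLA enforcement sketch: only links meeting the SLA are
# eligible; if none qualify, fall back to the lowest-latency link.

SLA = {"voice": {"max_loss": 1.0, "max_jitter": 30, "max_latency": 150}}

def eligible_links(app, links):
    sla = SLA[app]
    return [l for l in links
            if l["loss"] <= sla["max_loss"]
            and l["jitter"] <= sla["max_jitter"]
            and l["latency"] <= sla["max_latency"]]

def route(app, links):
    ok = eligible_links(app, links)
    # If no link meets the SLA, re-route over the least-latent link anyway.
    return (ok or sorted(links, key=lambda l: l["latency"]))[0]["name"]

links = [
    {"name": "mpls", "loss": 0.2, "jitter": 5,  "latency": 40},
    {"name": "lte",  "loss": 2.0, "jitter": 50, "latency": 90},
]
print(route("voice", links))  # mpls
```

The fallback branch mirrors the article’s point: even when no link fully satisfies the SLA, traffic still gets delivered over the best available path.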

It’s Transport―and carrier―agnostic

Because SD-WAN is a virtual WAN, it can be utilized by the transport protocol of your choosing, such as MPLS, 4G, Wi-Fi, et al. And there’s no longer a need to lease lines from only one (1) service provider, which provides customers far greater flexibility, including the ability to monitor circuits regardless of the service provider.

Before you go all in on SD-WAN…

…engage GDT’s expert SD-WAN solutions architects and engineers. They’re experienced at providing SD-WAN solutions for companies of all sizes.

Is SD-WAN the same as WAN Optimization?

Aside from the list of positives you’ve likely heard about SD-WAN (and there are many), there’s one thing it isn’t―WAN Optimization. Many incorrectly use SD-WAN and WAN Optimization interchangeably. That isn’t to say SD-WAN doesn’t greatly optimize networks, just that it’s not technically WAN Optimization, which was introduced roughly fifteen (15) years ago when WAN circuits were, well, pricey.

WAN Optimization refers to techniques and technologies that maximize the efficiency of data traversing the network, basically allowing companies to get the most out of legacy networks that still utilize WAN connections from telco providers, such as AT&T, Charter Spectrum, Level 3, and the like. Fifteen (15) years ago, WAN Optimization was all the rage. Bandwidth requirements outgrew the IT budgets many companies set aside to upgrade WAN connections, so WAN Optimization was the answer. Through caching and protocol optimization, end users could download cached information from a file that had already been downloaded. In short, it squeezed as much bandwidth juice from the WAN as possible.
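The caching idea can be illustrated with a toy branch-side cache. Real WAN optimization appliances work on byte streams and protocol chatter and are far more sophisticated; this sketch only captures the concept:

```python
# Toy branch-office cache: content crosses the WAN only on a miss.
# Real WAN optimization appliances are far more sophisticated.

import hashlib

class BranchCache:
    def __init__(self):
        self.store = {}     # content hash -> bytes held locally
        self.wan_bytes = 0  # bytes that actually crossed the WAN

    def fetch(self, content: bytes) -> bytes:
        key = hashlib.sha256(content).hexdigest()
        if key not in self.store:
            self.wan_bytes += len(content)  # miss: pull across the WAN
            self.store[key] = content
        return self.store[key]              # hit: served from the cache

cache = BranchCache()
report = b"Q3 sales figures" * 100
cache.fetch(report)
cache.fetch(report)      # second download never touches the WAN
print(cache.wan_bytes)   # 1600, not 3200
```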

It worked well for some traffic, but not all, and required dedicated hardware at headquarters and each remote location (then came the management and maintenance…). But bandwidth costs began to drop―precipitously―and having Gig connections became both commonplace and affordable.

Sounds like the death of WAN Optimization, right?

Not so fast. If you surmised that cheaper, commoditized bandwidth and SD-WAN teamed up to toss WAN Optimization onto the scrapheap, you’ve surmised incorrectly. No question, the wallet-friendly cost of broadband and, of course, SD-WAN have reduced the desire for WAN Optimization, but not the need for it. WAN Optimization can serve as an impactful supplement to SD-WAN, and can allow you to make the most out of your infrastructure by:

  • Reducing latency as a result of very wide area networks, meaning those that span long distances.
  • Compressing data to address TCP/IP protocol limitations and satisfy stringent QoS requirements.
  • Addressing congestion due to limited bandwidth, which can limit SD-WAN’s ability to more quickly re-route traffic.
  • Handling slower, chattier protocols more efficiently.

Call on the experts

If you have questions about how SD-WAN can be utilized to bring its many benefits to your organization, like enhanced application performance, less complexity, greater flexibility and reduced network costs, contact GDT’s team of experienced SD-WAN solutions architects and engineers. They’d love to hear from you.

Cisco HyperFlex runs point on customers’ hyperconverged journeys

The term hyperconvergence has been getting a lot of press in the last few years, and rightly so. It provides pretty much everything that legacy IT infrastructures don’t―flexibility, scalability and simplicity. It enables, in a single system, the management of equipment to handle a wide range of workloads, such as database management, collaboration, packaged software, such as SAP and Oracle, virtual desktop, analytics, web servers, and more. It’s software-defined, which is another way of saying quicker network provisioning, more control and visibility, and less downtime.

Cisco HyperFlex

HyperFlex, Cisco’s answer to hyperconvergence, is being successfully utilized by a wide range of industries. The following are a few of the many ways in which organizations of all sizes are enjoying Cisco HyperFlex:

Virtual Desktops

There was a time, not too long ago, when companies couldn’t pull the trigger on a virtual desktop solution due to the high upfront costs. Sure, they loved the idea, but just couldn’t make it fit into their budget. HyperFlex not only addresses the prohibitive cost issue, but does so while successfully tackling another one that organizations investigating a virtual desktop infrastructure (VDI) were faced with―complexity.

Branch or Remote Offices

Whether through organic growth or due to a merger or acquisition, one thing is certain―your organization’s IT needs today will soon look different. So whether growth includes more employees, more locations, or both, HyperFlex allows for an easy way to deploy hardware wherever it’s needed while being managed from a central location.

Server Virtualization

With HyperFlex, virtual server resources can be reallocated as needed to address the changing demands on storage, compute, and networking. Legacy systems require different approaches to each (see Complexity).


Agile Development

Developers are always under the gun to rapidly roll out solutions that address ever-evolving business needs. Without hyperconvergence, however, their job is much more taxing, as hardware provisioning needs to be separately considered for storage, networking, virtualization and compute. This is exacerbated because Agile project management and development requires regular, ongoing testing and remediation. With Cisco HyperFlex, virtualized hardware can be easily configured to accommodate frequent revisions and testing.

Cisco HyperFlex provides Software-Defined…

…Compute. Cisco’s Unified Computing System (Cisco UCS) is the foundation on which HyperFlex is built, and provides an easy, single point of management so resources can be adjusted to address the shifting needs of businesses.

…Storage. Cisco’s HyperFlex HX Data Platform software is a high-performance file system that supports hypervisors (virtual machine monitors, or VMMs) with optimization and data management services.

…Networking. Cisco’s UCS provides a highly adaptive environment that offers easy integration with Cisco Application Centric Infrastructure (Cisco ACI), Cisco’s software-defined networking (SDN) solution that delivers hardware performance with software flexibility.

Call on the experts

To find out more about Cisco HyperFlex and what hyperconvergence can do for your organization, contact GDT’s hyperconvergence experts. They’d love to hear from you.




Why Companies are Turning to Mobility Managed Solutions (MMS)

By Richard Arneson

If mobility isn’t the most used word of the past ten (10) years, it’s got to be a close second. And mobility is no longer just about using smartphones or tablets to purchase Christmas presents and avoid trips to the shopping mall. Mobility is transforming the way businesses operate, how their employees collaborate and, ultimately, how they generate more revenue. With the rapidly increasing adoption of BYOD (Bring Your Own Device), companies need to ensure connectivity is fast, reliable, seamless and highly secure. And with the Internet of Things (IoT), companies can now offer customers immediate value and utilize advanced data analytics to better understand buyers’ tendencies and purchasing behaviors.

With so much at stake, it’s critical that companies carefully develop a mobility strategy that helps employees optimize their time and ultimately deliver bottom line results. Following are some of the many reasons why companies are turning to MMS providers to ensure they’ll get the most out of their mobility solutions.


Counting on your existing IT staff to have the necessary skillsets in place to create, then implement, a mobility strategy could end up costing your organization considerable time and money. Having them attempt to ramp up their mobility education is fine, but it lacks one key component―experience. You wouldn’t have a surgeon with no prior hands-on experience operate on you or a loved one. Why do the same with your company’s mobility strategy?


Time Management

Lack of experience goes hand-in-hand with poor time management. In other words, the less experience, the longer it will take. And pulling existing IT staff off other key initiatives could mean putting projects on hold, if not cancelling them altogether. And the time it takes to remediate issues that arise from a lack of empirical knowledge will only exacerbate the problem.


Security

With the ever-increasing demands for mobility solutions and applications, ensuring that company data is protected can’t be overlooked or handled piecemeal. Doing so will leave you in reactive, not proactive, security mode. Mobile security is enhanced and improved on a regular basis, but without the needed expertise on staff, those enhancements could go unnoticed. Also, an experienced Mobility Managed Solutions provider can help you set needed security policies and guidelines.

Maximizing Employee Productivity

One of the key reasons companies develop and enhance mobility solutions is to help ensure employee productivity is maximized. Not conducting fact-finding interviews with different departments to understand their existing and evolving demands will mean your mobility strategy is only partially baked. And trying to retro-fit solutions to address overlooked elements will result in additional time and unnecessary costs.


Ongoing Management

Mobility solutions aren’t a set-it-and-forget-it proposition. They must be managed, monitored and optimized on a regular basis. Updates need to be maintained and administered. And as with any new technology roll-out, there will be confusion and consternation, so technical support needs to be prepped and ready before trouble tickets start rolling in.

Best Practices

There are a number of best practices that must be considered when developing and implementing mobility solutions. Are you in a heavily regulated industry and, if so, does your solution adhere to industry-related mandates? Have mobile form factors and operating systems been taken into consideration? Will roll-out be conducted all at once or in a phased approach? If phased, have departmental needs been analyzed and prioritized? Have contingency plans been developed in the event roll-out doesn’t perfectly follow the script you’ve written?


Cost Savings

Lacking the mobility experience and skillsets on staff could mean unnecessary costs are incurred. In fact, studies have shown that companies utilizing an MMS provider can save anywhere from 30 to 45% per device.

Experienced Expertise

Each of the aforementioned considerations is critically important, but all fall under one (1) primary umbrella―experience. You can read a book about how to drive a car, but it won’t do you much good unless you actually drive one. It’s all about the experience, and mobility solutions are no different. Hoping you have the right skillsets on staff and hoping it will all work out are other ways of saying High Risk. Hope is not a good mobility solutions strategy.

If you have questions about your organization’s current mobility strategy, or you need to develop one, contact GDT’s Mobility Solutions experts. The team is comprised of experienced solutions architects and engineers who have implemented mobility solutions for some of the largest organizations in the world. They’d love to hear from you.

GDT hosts VMware NSX Workshop


On Thursday, June 28th, GDT hosted a VMware NSX workshop at GDT’s Innovation Campus. It was a comprehensive, fast-paced training course focused on installing, configuring, and managing VMware NSX™. It covered VMware NSX as part of the software-defined data center platform, including functionality operating at Layers 2 through 7 of the OSI model. Hands-on lab activities were included to help support attendees’ understanding of VMware NSX features, functionality, and ongoing management. Great event, as always!


GDT Lunch & Learn on Agile IoT

On Tuesday, June 19th, GDT Associate Network Systems Engineer Andrew Johnson presented, as part of the GDT Agile Operations (DevOps) team’s weekly Lunch & Learn series, a session on the wild world of IoT (Internet of Things). Andrew provided a high-level overview of what IoT is and what can be done when all things are connected. As more and more devices get connected, the ability to draw rich and varied information from the network is changing how companies, governments and individuals interact with the world.

Why this market will grow 1200% by 2021!

According to an IDC report released in 2017, the SD-WAN market was predicted to grow from a then $700M to over $8B by 2021. IDC has since revised that figure upward. Now it’s over $9B.

SD-WAN is often, yet incorrectly, referred to as WAN Optimization, though that phrase is actually a perfect way to describe what SD-WAN delivers. The sundry WAN solutions of the past twenty-five (25) years―X.25, private lines (T1s/DS3s) and frame relay―gave way to Multi-Protocol Label Switching (MPLS) in the early 2000s.

MPLS moved beyond frame relay’s Committed Information Rate (CIR)―a throughput guarantee―and offered Quality of Service (QoS), which allows customers to prioritize time-sensitive traffic, such as voice and video. MPLS has been the primary means of WAN transport over the last fifteen (15) years, but SD-WAN provides enterprises and service providers tremendous benefits above and beyond MPLS, including the following:

Easier turn-up of new locations

With MPLS, as with any transport technology of the past, turning up a new site or upgrading an existing one is complex and time consuming. Each edge device must be configured separately, and the simplest of changes can take weeks. With SD-WAN, setting up a new location can be provisioned automatically, greatly reducing both time and complexity.

Virtual Path Control

SD-WAN software can direct traffic in a more intelligent, logical manner, and is also, like MPLS, capable of addressing QoS. SD-WAN can detect a path’s degradation and re-route sensitive traffic based on its findings. Also, having backup circuits stand by unused (and costing dollars, of course) is a thing of the past with SD-WAN.

Migration to Cloud-based Services

With traditional WAN architectures, traffic gets backhauled to a corporate or 3rd party data center, which is costly and reduces response times. SD-WAN allows traffic to be sent directly to a cloud services provider, such as AWS or Azure.


Centralized Security

SD-WAN provides a centralized means of managing security and policies, and utilizes standards-based encryption regardless of transport type. And once a device is authenticated, assigned policies are downloaded and cloud access is granted―quick, easy. Compare that to traditional WANs, where security is handled by edge devices and firewalls. Far more complex and costly.

…and last, but not least

SD-WAN can greatly reduce bandwidth costs, which are often the greatest expense IT organizations incur, especially if they’re connecting multiple locations. MPLS circuits are pricey, and SD-WAN can utilize higher bandwidth, lower cost options, such as broadband or DSL.

Does SD-WAN mark the end of MPLS?

Given the stringent QoS demands of some enterprise organizations, and the fear that SD-WAN won’t be able to accommodate them, it’s unlikely that SD-WAN will totally replace MPLS. And some organizations are simply averse to change, and/or fear their current IT staff doesn’t have the necessary skillsets to successfully migrate to SD-WAN, then properly monitor and manage it moving forward.

Call on the SD-WAN experts

To find out more about SD-WAN and the many benefits it can provide your organization, contact GDT’s tenured SD-WAN engineers and solutions architects. They’ve implemented SD-WAN solutions for some of the largest enterprise networks and service providers in the world. They’d love to hear from you.

Calculating the costs, hard and soft, of a cloud migration

When you consider the costs of doing business, you might only see dollar signs―not uncommon. But if your organization is planning a cloud migration, it’s important to understand all costs involved, both hard and soft. Sure, calculating the hard costs of a cloud migration is critically important―new or additional hardware and software, maintenance agreements, additional materials, etc.―but failing to consider and calculate soft costs could mean pointed questions from C-level executives will embarrassingly go unanswered. And not knowing both types of costs could result in IT projects and initiatives being delayed or cancelled—there’s certainly a cost to that.

When you’re analyzing the many critical cloud migration components―developing risk assessments, analyzing the effects on business units, applications and interoperability―utilize the following information to help you uncover all associated costs.

First, you’ll need a Benchmark

It’s important to first understand all costs associated with your current IT infrastructure. If you haven’t calculated that cost, you won’t have a benchmark against which you can evaluate and compare the cost of a cloud migration. Calculating direct costs, such as software and hardware, is relatively easy, but ensure that you’re including additional expenses, as well, such as maintenance agreements, licensing, warranties, even spare parts, if utilized. And don’t forget to include the cost of power, A/C and bandwidth. If you need to confirm cost calculations, talk with accounts payable―they’ll know.
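A simple way to build that benchmark is to tally every cost category the paragraph mentions. The line items and figures below are placeholders, not real numbers:

```python
# Placeholder benchmark of current annual infrastructure costs,
# mirroring the categories mentioned above. All figures are invented.

current_costs = {
    "hardware":     120_000,
    "software":      45_000,
    "maintenance":   18_000,
    "licensing":     22_000,
    "warranties":     5_000,
    "power_and_ac":  10_000,
    "bandwidth":     30_000,
}

annual_benchmark = sum(current_costs.values())
print(annual_benchmark)  # 250000
```

Whatever the real figures are, the point is to capture them all in one place so the cloud quote has something concrete to be compared against.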

Hard Costs of a Cloud Migration (before, during and after)


Hardware and Provider Costs

Determining the hard costs related to cloud migrations includes any new or additional hardware required. That’s the easy part―calculating the monthly costs from cloud service providers is another issue. It has gotten easier, especially for Amazon Web Services (AWS) customers: AWS offers an online tool that calculates Total Cost of Ownership (TCO) and monthly costs. But it’s still no picnic. Unless you have the cloud-related skillsets on staff, getting an accurate assessment of monthly costs might require you to incur another, but worthwhile, hard cost―hiring a consultant who understands cloud pricing and can conduct a risk assessment prior to migration.


Data Transfer Costs

Cloud service providers charge customers a fee to transfer data from existing systems. And there might be additional costs in the event personnel are needed to ensure customers’ on-prem data is properly synced with data that has already been transferred. Ensuring this data integrity is important, but not easy, especially for an IT staff with no prior cloud migration experience.


Ongoing Costs

Other than the monthly costs you’ll incur from your cloud provider of choice, such as AWS or Azure, consideration must be given to the ongoing maintenance costs of your new cloud environment. And while many of these are soft costs, there can be hard costs associated with them, as well, such as the ongoing testing of applications in the cloud.

The Hard-to-Calculate Soft Costs

If they’re not overlooked altogether, soft costs are seldom top-of-mind. Determining the value of your staff’s time isn’t hard to calculate (project hours multiplied by their hourly rate, which is calculated by dividing weekly pay by 40 (hours)), but locking down the amount of time a cloud migration has consumed isn’t easy. Now try calculating one that hasn’t taken place yet. There might be a cost in employee morale, as well, in the event the cloud migration doesn’t succeed or deliver as planned.
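The formula in the text reduces to two lines of arithmetic. The salary and hours below are made-up examples:

```python
# Soft-cost formula from the text: hourly rate = weekly pay / 40,
# soft cost = project hours x hourly rate. Figures are illustrative.

def migration_soft_cost(weekly_pay, project_hours):
    hourly_rate = weekly_pay / 40
    return project_hours * hourly_rate

# An engineer paid $2,000/week spending 120 hours on the migration:
print(migration_soft_cost(2000, 120))  # 6000.0
```

The hard part, as the text notes, isn’t the arithmetic; it’s pinning down the number of hours, especially for a migration that hasn’t happened yet.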

Consider the amount of time required to properly train staff and keep them cloud-educated into perpetuity―today’s cloud will look a lot different than future generations.

The testing and integrating of applications to be migrated takes considerable time, as well, and several factors must be considered, such as security, performance and scalability. Testing should also include potential risks that might result in downtime, and ensuring interoperability between servers, databases and the network.

Also, there’s a far greater than 0% chance your cloud migration won’t go exactly as planned, which will require additional man hours for proper remediation.

There are also soft costs associated with projects that are put on hold, especially if they delay revenue generation.

If questions exist, call on the experts

Here’s the great news―moving to the cloud, provided the migration is done carefully and comprehensively, will save considerable hard and soft costs now and in the future. Calculating the costs of a cloud migration is important, but not an easy or expeditious venture.

If you have questions about how to accurately predict the costs of a future cloud migration, contact GDT’s Cloud Experts. They’d love to hear from you.

A Fiber Optic First

It’s one of those “Do you remember where you were when…?” questions, at least for those fifty or older. And it didn’t just affect those in northern, hockey-friendly states. People as far south as Texas stopped their cars at the side of the road and began honking their horns, then breaking into The Star-Spangled Banner and America the Beautiful while passing motorists sprayed them with wet grime. It was Friday, February 22nd, when radios nationwide announced that the impossible had occurred at the 1980 Winter Olympics in Lake Placid, NY—the United States hockey team, comprised primarily of college-aged amateur athletes, had just defeated the Soviet Union Red Army team, considered by most familiar with the sport to be the best hockey team of all time.

The closing seconds, announced worldwide by legendary sportscaster Al Michaels, became arguably the most well-known play-by-play call in sports history:

“11 seconds, you’ve got 10 seconds, the countdown going on right now! Morrow, up to Silk. Five seconds left in the game. Do you believe in miracles? YES!”

Legendary for several reasons

The game, which Sports Illustrated named the greatest sporting event in American history, is legendary for other reasons, as well. The TV broadcast of the game actually occurred later that evening during prime time on ABC, and was part of the first television transmission that utilized fiber optics. While it didn’t deliver the primary TV transmission, fiber was used to provide backup video feeds. Based on its success, it became the primary transmission vehicle four (4) years later at the 1984 Winter Olympics in Sarajevo, Yugoslavia.

Why fiber optics will be around―forever

It’s no wonder fiber optics carries the vast majority of the world’s voice and data traffic. There was a time (the late 1950s) when it was believed satellite transmission would be the primary, if not exclusive, means of delivering worldwide communications. It wasn’t the Olympics, but a 1958 Christmastime message from President Eisenhower, intended to allay Americans’ Cold War fears, that was the first delivered via satellite. But if you’re a user of satellite television, you’ve certainly experienced the network downtime that comes with heavy cloud cover or rain.

And wireless communications, such as today’s 4G technology (5G will be commercially available in 2020), require fiber optics to backhaul data from wireless towers to network backbones, from which it’s delivered to its intended destination via…fiber optics.

The question regarding fiber optics has been debated for years: “Will any technology on the horizon replace the need for fiber optics?” Some technologists (although there appear to be few) say yes, but most say no―as in absolutely no. Line-of-sight wireless communications are an option, and have been around for years, but deploying them in the most populated areas of the country―cities―is impractical. If anything stands between communicating nodes, you’ll be bouncing your signal off a neighboring building. Not effective.

Facebook will begin trials in 2019 for Terragraph, a service it claims will replace fiber optics. Sure, it might in some places, such as neighborhoods, but it’s only capable of transmitting data 100 feet or less. It’s the next generation of 802.11, and while it’s capable of transmitting data at speeds up to 30 Gbps, it’s no option for delivering 1s and 0s across oceans.

Fiber is fast, it’s durable, and it lasts a long time. Yep, fiber optics will be around for a while.

Did you know?

  • Signals in fiber optics travel at nearly the speed of light, and aren’t affected by EMI (electromagnetic interference).
  • Without electricity coursing through it, fiber optics doesn’t create fire hazards. Add to that, it’s green, as in eco-friendly green. And it degrades far less quickly than its coax and copper counterparts.
  • Fiber is incredibly durable, and isn’t nearly as susceptible to breakage as copper wire or coaxial cable. Also, fiber has a service life of 25-35 years.
  • There’s less attenuation with fiber, meaning signals can travel much farther before experiencing signal loss.
  • With Dense Wave Division Multiplexing (DWDM), the fiber’s light source can be divided into as many as eighty (80) wavelengths, with each carrying separate, simultaneous signals.
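The bullet points above can be put into rough numbers. The sketch below estimates one-way propagation delay through fiber (light in silica glass travels at roughly two-thirds of c), total attenuation over a span, and aggregate DWDM capacity. The specific figures—a refractive index of 1.47, 0.35 dB/km attenuation, and 10 Gbps per wavelength—are illustrative assumptions, not vendor specifications:

```python
# Back-of-the-envelope fiber optics math (illustrative figures only).

C = 299_792_458            # speed of light in a vacuum, m/s
REFRACTIVE_INDEX = 1.47    # typical for silica fiber; light travels at ~c/1.47

def propagation_delay_ms(distance_km: float) -> float:
    """One-way delay for light traveling through fiber of the given length."""
    speed = C / REFRACTIVE_INDEX              # m/s inside the glass
    return distance_km * 1000 / speed * 1000  # seconds -> milliseconds

def signal_loss_db(distance_km: float, attenuation_db_per_km: float = 0.35) -> float:
    """Total attenuation over the span; modern fiber loses a fraction of a dB per km."""
    return distance_km * attenuation_db_per_km

def dwdm_capacity_gbps(wavelengths: int = 80, gbps_per_wavelength: float = 10) -> float:
    """Aggregate capacity when DWDM splits one fiber into many wavelengths."""
    return wavelengths * gbps_per_wavelength

print(round(propagation_delay_ms(100), 2))   # ~0.49 ms across 100 km
print(signal_loss_db(100))                   # 35.0 dB before amplification
print(dwdm_capacity_gbps())                  # 800.0 Gbps on a single fiber
```

Even with the modest 10 Gbps per wavelength assumed here, one strand of fiber carries nearly a terabit per second under DWDM.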

Call on the experts

If you have questions about how optical networking can help your organization, contact the GDT Optical Transport Team. They’re comprised of highly experienced optical engineers and architects, and support some of the largest enterprise and service provider networks in the world.

Migrating to the Cloud? Consider the following

First, follow Stephen Covey’s unintentional Cloud Migration advice

Stephen Covey, in his 1989 bestselling book The 7 Habits of Highly Effective People, lists “Begin with the end in mind” as the second habit. But in the event you’re considering a cloud migration for your organization, Covey’s second habit should be your first.

Yes, you must first fully understand the desired end results of moving to the cloud before you do so. Whether it’s cost savings, greater flexibility, more robust disaster recovery options, better collaboration, work-from-anywhere options, automatic software and security updates, enhanced competitiveness in the marketplace, or better, safer controls over proprietary information and documentation, you need to ensure the precise goals are outlined and communicated so everybody in your organization understands the “end in mind.” There needs to be a carefully considered reason prior to your journey. You don’t get in the car and start driving without knowing where you want to go; why would you do it on your cloud journey?

Prior to any cloud migration, you must do exactly what you would prior to any other type of journey―go through your “To-Do” checklist. Without this level of scrutiny, you’ll gloss over, if not totally exclude, key elements that need to be considered ahead of time. And not checking off necessary considerations prior to a cloud migration will be far more damaging than not packing your favorite pillow or a toothbrush. Trying to correct problems from a poorly planned cloud migration can cost considerable time, expense and credibility.

The following will give you an idea of the key questions that must be asked, and carefully considered and answered, prior to beginning your organization’s cloud journey.

What’s your Cloud Approach?

Will you be utilizing a public or private cloud model, or a combination (hybrid) of the two (2)? Will you maintain certain apps on-premises or in a data center, taking more of a Hybrid IT approach? The answers to these questions involve several key elements, including, to name a few, existing licenses, architectures and transaction volume. And considering the “6 R’s” of Cloud migrations will greatly assist in helping you develop the right Cloud Approach:


Retire

Empirically speaking, it’s not uncommon for organizations to discover that as much as 20-30% of their current applications aren’t being utilized and are prime candidates for total shutdown.


Retain

Determine which applications should remain managed on-prem. For instance, certain latency- or performance-sensitive applications, or any that involve sensitive and/or industry-regulated data, might not be right for the cloud. Several applications are simply not supported to run in the cloud, and some require specific types of servers or computing resources.


Replatform

Which applications will benefit, once migrated to the Cloud, from moving to a different platform to save time and hassle related to database management? Amazon Relational Database Service (Amazon RDS), for example, is a database-as-a-service (DBaaS) that makes setting up, operating and scaling relational databases in the cloud much easier.


Rehost

Often referred to as “lift and shift,” rehosting moves certain applications to the Cloud as-is, which can often be accomplished more easily with existing automation tools, such as AWS’s VM Import/Export.


Repurchase

Which current applications can be replaced with equivalents delivered from the Cloud (SaaS)?


Refactor / Re-architect

If scaling, enhanced performance, or adding new features can be accomplished via a Cloud migration, applications might need to be refactored or re-architected.
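To make the six strategies above concrete, here’s a hypothetical inventory sketch that tags each application with one of the 6 R’s and summarizes the resulting plan. The application names and strategy assignments are invented for illustration:

```python
# Hypothetical application inventory tagged with the "6 R's" of cloud migration.
from collections import Counter

# Strategy per application -- names and choices are illustrative only.
inventory = {
    "legacy-reporting": "retire",      # unused; prime candidate for shutdown
    "hr-portal":        "repurchase",  # replace with a SaaS equivalent
    "billing-db":       "replatform",  # move onto a managed DBaaS such as RDS
    "internal-wiki":    "rehost",      # straight "lift and shift"
    "trading-engine":   "retain",      # latency-sensitive; stays on-prem
    "customer-api":     "refactor",    # re-architect to scale in the cloud
}

def migration_plan(apps: dict) -> Counter:
    """Count how many applications fall under each strategy."""
    return Counter(apps.values())

plan = migration_plan(inventory)
print(dict(plan))  # one application per strategy in this toy inventory
```

An inventory like this, however rough, forces the “which R applies?” question to be answered for every application before migration begins.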

What’s the Prioritization Order of Applications that will be Migrated?

It probably won’t come as a surprise to hear that the least critical applications should be migrated first. Start with applications that won’t leave your entire organization hamstrung if down or inaccessible, and work up from there. Subsequent, more critical application migrations will benefit from the prior experience(s).
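The ordering rule above—least critical first—can be expressed as a simple sort. The applications and their criticality scores below are hypothetical:

```python
# Order applications for migration: least business-critical first.
# (name, criticality 1-5, where 5 = most critical) -- values are hypothetical.
apps = [
    ("crm",           5),
    ("internal-wiki", 1),
    ("billing",       4),
    ("dev-sandbox",   2),
]

# Sort ascending by criticality so low-risk apps migrate first and
# lessons learned benefit the later, more critical migrations.
migration_order = [name for name, crit in sorted(apps, key=lambda a: a[1])]
print(migration_order)  # ['internal-wiki', 'dev-sandbox', 'billing', 'crm']
```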

Are Security Concerns being considered?

Think about each of the network security demands and policies that must be closely monitored and adhered to. How will they be affected by a cloud migration? Think about any industry-related requirements, such as HIPAA, PCI and those mandated by FERC or the FTC. As data migrates to the public cloud, governance strategies will probably need to change, as well.

Are the Needed Cloud Migration Skillsets on staff?

Trying to retrofit existing IT personnel with a slew of quick-study certifications will leave one important element out of the equation―experience. Think of it this way: you can read a book about swimming, but it doesn’t really mean much until you get in the water. So, if your staff has only read about cloud migrations, you’ll probably want to turn to somebody who’s been in the cloud migration water for years. And doing so will help educate your staff, even provide them with the confidence to test new approaches.

Have costs been carefully considered?

Ask IT personnel why they’re moving to the cloud, and if “to save costs” isn’t mentioned first, it soon will be. Yes, moving to the cloud can save considerable costs (if done correctly), but no two (2) environments are alike when it comes to the degree of savings moving them to the cloud will deliver. In fact, some legacy applications might cost more if moved to the cloud. And additional bandwidth and associated costs must be taken into consideration, as well. Also, make sure you understand how licensing for each application is structured, and whether the licensing is portable if moved to the cloud.

Call on the experts

Moving to the cloud is a big journey, and undertaking one could be one of the biggest in your career. The question is, “Will it be a positive or negative journey?” Turning to experienced Cloud experts like those at GDT can point your cloud migration needle in a positive direction. They hold the highest levels of Cloud certifications in the IT industry, and they’d love to hear from you.


Are you Cloud-Ready?

Let’s face it, moving to the cloud is sexy. It’s the latest thing―at least as far as the general public is concerned―and proudly stating “We’re moving everything to the cloud” sounds modern, cutting-edge, even hip. (If you want to impress people at a cocktail party, inform them that the concept has actually been around for fifty (50) years. The Cloud’s real impact, however, was felt in the late 1990s, when Salesforce came onto the scene and began delivering an enterprise application to customers via its website.) Yes, everybody, it seems, wants to move to the cloud.

While many might feel their organization is cloud-ready, the truth is most are not. It seems and sounds so simple to move applications to the cloud (you just log into a website and start using the application, right?), but a lot of preparation, interviews and fact-finding must be conducted ahead of time.

The following is a list of questions you should ask yourself prior to a cloud migration. Companies that don’t ask themselves, and answer, these questions will be left wondering whether moving to the cloud was such a great idea in the first place.

Why are you moving to the Cloud?

If “Because it’s the thing to do” is your answer, even if you’re too embarrassed to state it publicly, it’s time to give the question considerable, deep-diving thought. And “Because it will save costs” isn’t enough prep, either. The Cloud offers many benefits, of course, but fully realizing them requires extensive knowledge regarding how to get them. If cloud migrations are completed correctly and comprehensively, your organization can enjoy greater flexibility, more robust disaster recovery options, capital expenditure savings, more effective collaboration, work-from-anywhere options, automatic software and security updates, enhanced competitiveness in the marketplace, and better, safer controls over proprietary information and documentation. But to get any or all of those, your current environment first needs a risk assessment.

Have you conducted a Risk Assessment?

Risk assessments are a critical component of cloud migrations. Consideration needs to be given to:

  • Savings, both in costs and time
  • How the cloud solution can, and will, be right-sized to meet the unique demands of your organization
  • The role automation, if needed, will play in your cloud deployment
  • How staff resources will be managed, including any previous cloud expertise and skillsets you have on staff
  • The ongoing monitoring of the cloud solution, and the ability to analyze usage and make necessary adjustments when needed (and they will be needed)
  • Security needs, including compliance with any industry-related regulations, such as, to name a couple, HIPAA and PCI
  • How sensitive data will be protected
  • Disaster recovery, including backups and auto-recovery
  • The ability to satisfy Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs)
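The last bullet can be made concrete: an RPO caps how much data you can afford to lose (so backups must run at least that often), while an RTO caps how long restoration may take. A minimal check, with made-up numbers:

```python
# Verify a backup/restore plan against RPO and RTO targets (all in hours).
def meets_objectives(backup_interval_h: float, restore_time_h: float,
                     rpo_h: float, rto_h: float) -> bool:
    """RPO: worst-case data loss equals the time since the last backup.
    RTO: worst-case downtime equals the time needed to restore service."""
    return backup_interval_h <= rpo_h and restore_time_h <= rto_h

# Nightly backups with a 2-hour restore, against a 24h RPO / 4h RTO target.
print(meets_objectives(24, 2, rpo_h=24, rto_h=4))   # True
# Weekly backups fail the same 24-hour RPO.
print(meets_objectives(168, 2, rpo_h=24, rto_h=4))  # False
```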

Failure to consider and satisfy any of the aforementioned could mean your cloud migration is doomed to fail. Again, a detailed, comprehensive risk assessment is a critical component that must be conducted prior to building a cloud migration strategy.

How will moving to the cloud affect business operations, not just IT?

Thinking outside the IT box is critically important. Interviews with key stakeholders from all business units―finance, marketing, accounting, project management, sales, DevOps, HR, etc.―need to be conducted to determine and understand their practices and goals, and how the cloud migration will affect, and enhance, them. A thorough analysis of the current environment needs to be conducted to understand how departments work interdependently. IT infrastructure, security, application dependencies and cost analysis need to be considered for each.

Are my applications Cloud-Ready?

It’s important to understand which applications are well-suited to move to the cloud, including related options for each. Some applications should be moved to the cloud, some should be in a private cloud, and others shouldn’t, or can’t, be moved at all. Each organization has unique needs and requirements, and all need to be incorporated into a migration plan that both organizes and prioritizes them so desired results can be achieved. For instance, certain mission-critical applications probably shouldn’t be migrated first, as their downtime might bring the entire organization to a grinding and costly halt.

Call on the Experts

The many benefits of moving to the Cloud are achievable, but getting there requires a level of expertise and associated skillsets that most organizations don’t already have on staff. If you have questions about moving to the Cloud, regardless of the size of your organization and its associated infrastructure, contact the GDT Cloud experts. They’d love to hear from you.

When SOC plays second fiddle to NOC, you could be in for an expensive tune

It’s not uncommon for people, even some IT professionals, to assume all of their organization’s security needs are being addressed through their NOC (Network Operations Center). Chances are, they’re not. NOCs and SOCs (Security Operations Centers) are entirely different animals, with varying goals, staffed by IT professionals with different skillsets and security-related industry certifications. Sure, they both identify issues, then work to resolve them, but most of the similarities end there.

In 2017, well over 4 billion records were exposed to cyberattacks. Believing your company is somehow shielded from them because it’s not of the Fortune 500 variety is a fool’s paradise. No company, regardless of its size or the industry within which it operates, is immune from threats. In a recent Global Information Security survey, only half of the participating organizations believed they could even detect or predict a cyberattack. Amazingly, many organizations view security as an afterthought, and cobble together a security plan with existing personnel who are ill-equipped to handle the intricacies and demands needed to fend off the bad guys―unfortunately, there are a lot of them.

The SIEM―what it is, and why it’s critically important

It can be argued that the SIEM (Security Information and Event Management system) is the fuel that makes the SOC engine run. It collects information from devices that are on or access the network, including login attempts and data transfers, then alerts security professionals of any potential threats. There was a time when SIEMs got a bad rap, some of it deservedly so. At one time, they generated a lot of false positives, which resulted in many “boy who cried wolf” scenarios. Many customers didn’t trust them to reliably provide usable information, at least on a regular basis, and quite possibly ignored alerts on actual threats. Thankfully, however, SIEMs have gotten far more accurate and reliable in recent years, in part because they now allow for far more customization, both in reporting and automated responses.
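To illustrate the kind of correlation a SIEM performs—and why tuning matters for false positives—here’s a toy rule that flags repeated failed logins from one source within a sliding time window. The threshold, window, and event format are invented for the example; real SIEM rules are far richer:

```python
# Toy SIEM-style correlation rule: alert when one source IP racks up
# repeated failed logins inside a sliding time window.
from collections import defaultdict, deque

def failed_login_alerts(events, threshold=3, window_s=60):
    """events: iterable of (timestamp_s, source_ip, success: bool).
    Returns the set of source IPs exceeding the failure threshold."""
    recent = defaultdict(deque)   # ip -> timestamps of recent failures
    alerts = set()
    for ts, ip, success in events:
        if success:
            continue
        q = recent[ip]
        q.append(ts)
        while q and ts - q[0] > window_s:   # drop failures outside the window
            q.popleft()
        if len(q) >= threshold:             # tune this to cut false positives
            alerts.add(ip)
    return alerts

events = [
    (0,  "10.0.0.5", False), (10, "10.0.0.5", False), (20, "10.0.0.5", False),
    (5,  "10.0.0.9", False), (300, "10.0.0.9", False),  # spread out; no alert
]
print(failed_login_alerts(events))  # {'10.0.0.5'}
```

Raising the threshold or shrinking the window is exactly the kind of customization that, as noted above, has made modern SIEMs far less prone to “crying wolf.”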

Don’t hand the SIEM reins over to anybody

Having a SIEM isn’t a set-it-and-forget-it proposition. Dealing with security threats is a digital cat and mouse game. New cyberattacks are being invented every day, and the types of threats, such as phishing, DDoS and Trojans (to name a few), are plentiful. And even if you provide extensive, internal training, you’ll never be able to fully neutralize your company’s biggest threat―end users, many of whom have a seemingly innate ability to allow, even unknowingly invite, security threats onto the network.

Specialized Security Skillsets

It’s a security analyst’s job to understand the greatest threats to assets, and which of the customer’s assets take the highest priority. They can create mock attack scenarios to ensure the SOC can, and will, respond when real attacks occur. From this, they can better customize security detection and ensure responses are structured accordingly.

Threat Intelligence

A key element that security analysts provide is threat intelligence, which is the proactive understanding of existing threats or those on the horizon, including, of course, how to defend against them. Ask an IT professional about their organization’s threat management plan and the mitigations they have in place to address the vast array of existing or future threats, and you’ll probably be met with stunned silence. If they’re not well-versed in security, chances are existing and impending threats haven’t been considered. And if they haven’t been considered, it goes without saying that they’re not prepared to defend against them.

Plugging Security Gaps

Cybercriminals are essentially looking for one thing―vulnerabilities. Not fully understanding where network vulnerabilities exist can leave organizations wide open for attacks. Some of these vulnerabilities can be addressed with simple software patches, but if nobody on staff is closely monitoring and implementing them, you’ve made an unconscious decision to leave many security gaps unaddressed. It may or may not come as a surprise that most organizations don’t have a well-defined security patch management plan in place.
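A patch management plan can start as simply as comparing deployed software versions against the latest patched releases. The host inventory, package names, and version numbers below are hypothetical:

```python
# Flag hosts running software behind the latest patched release.
# Inventory and "latest" versions are hypothetical examples.
deployed = {
    "web-01": {"openssl": (1, 1, 1)},
    "db-01":  {"openssl": (3, 0, 8)},
}
latest_patched = {"openssl": (3, 0, 8)}

def unpatched_hosts(deployed, latest):
    """Return {host: [package, ...]} for anything older than the latest patch.
    Version tuples compare element-by-element, so (1, 1, 1) < (3, 0, 8)."""
    gaps = {}
    for host, packages in deployed.items():
        stale = [pkg for pkg, ver in packages.items() if ver < latest.get(pkg, ver)]
        if stale:
            gaps[host] = stale
    return gaps

print(unpatched_hosts(deployed, latest_patched))  # {'web-01': ['openssl']}
```

Running a check like this on a schedule—and acting on its output—is the difference between a patch management plan and the unconscious decision to leave gaps open.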

Monitored and Managed 24x7x365

Providing on-going, real-time management and monitoring of an organization’s endpoints, networks, services and databases 24×7 is critical when defending against threats. Your SOC is only as good as its weakest link, and if providing this level of security and scrutiny isn’t possible, you’ve just defined a very weak link. Threat detection and related responses must be timely, regardless of threat type, time of day or day of week.

For questions, call on the experts at GDT

Sure, companies can operate their own SOC, but whether it’s done in-house or with a third-party managed security solutions provider, it should be managed, maintained and monitored by tenured security analysts who think, live and breathe security. Anything less might soon leave you wondering why you ever thought a SOC could play second fiddle to the NOC. And security analysts, when combined with advanced automation solutions, will greatly enhance your defense against cyberattacks and security breaches.

For more information about GDT’s SOC Managed Services, or if you have questions about anything related to IT security, contact GDT’s security professionals here. They’d love to hear from you.

And if you’d like to better address some of your network security concerns, subscribe to GDT’s Vulnerability Alerts, which contain information and links to software patches.





GDT Lunch & Learn on Data Breaches–Protecting the Corporate Consumer

On Tuesday, May 22nd, GDT SOC Analyst Moe Janmohammad presented information about data breaches as part of the GDT Agile Operations (DevOps) team’s weekly Lunch & Learn series. Data breaches are seemingly a weekly occurrence these days, and while there has been a lot of discussion around protecting consumers, very little is being done for the corporate purchaser. Watch and learn how companies and individuals can understand their risk profile, and when and where they may have already been compromised.

GDT and QTS Enter Into Cloud and Managed Services Partnership

Agreement represents continued successful execution on QTS’ strategic growth plan

QTS Realty Trust (NYSE: QTS), a leading provider of software-defined and mega-scale data center solutions, today announced that it has entered into a strategic partnership with GDT, an international provider of managed IT solutions, representing a key step in QTS’ strategic growth plan announced in February 2018. Under the agreement, QTS will transition certain cloud and managed services customer contracts and support to GDT. QTS expects to complete its transfer of approximately 200 specific customers to GDT by the end of 2018.

Under the terms of the agreement, GDT will expand its colocation presence within QTS facilities to support customers as they are migrated to GDT’s platform. As GDT is an existing QTS partner and CloudRamp customer, QTS will facilitate a seamless integration with GDT through its Service Delivery Platform (SDP), which will provide customers enhanced visibility and control of their IT environments. Upon transition of the customers, GDT will maintain the current service level and support pursuant to the terms of each individual customer contract.

“We are pleased to partner with GDT, a leading managed IT provider and current QTS CloudRamp customer, to extend our hybrid solution capabilities while maintaining the consistent world-class service and support our customers have come to expect,” said Chad Williams, Chairman and CEO – QTS.

“This agreement also represents the next step in our strategic plan to accelerate growth and profitability,” Mr. Williams continued. “Consistent with our goal of narrowing the scope of cloud and managed services that we directly deliver, this partnership improves our ability to continue to deliver a differentiated hybrid solution, while unlocking enhanced profitability and future growth opportunities for QTS. Through SDP, we can enable a broader set of services for our customers through partner platforms including public cloud providers, Nutanix for Private Cloud, Megaport and Packetfabric for universal software-defined connectivity, and now GDT for managed hosting and other IT solutions.”

As part of the agreement, GDT will pay QTS a recurring partner channel fee based on revenue that is transitioned, as well as future growth on those accounts. While the financial benefit to QTS during the year will be relatively modest as the accounts are transitioned, this partnership arrangement is expected to support future revenue growth and profitability, beginning in 2019 and beyond, without significant cost to QTS. QTS expects that, in transitioning customer contracts to GDT, the Company will be able to drive accelerated leasing performance and growth, improve predictability in its business and significantly enhance overall profitability.

“We are pleased to expand our partner ecosystem with QTS, one of the leading innovators in the data center space,” said GDT CEO, JW Roberts. “This new partnership will greatly enhance our customer-first focus and our ability to consistently deliver innovative solutions to the IT industry. We look forward to managing a smooth customer transition and delivering additional value.”

In connection with today’s announcement, QTS also announced that the Company will issue its financial results for the first quarter ended March 31, 2018 before market open on Wednesday, April 25, 2018. The Company will also conduct a conference call and webcast at 7:30 a.m. Central time / 8:30 a.m. Eastern time. The dial-in number for the conference call is (877) 883-0383 (U.S.) or (412) 902-6506 (International). The participant entry number is 7555289# and callers are asked to dial in ten minutes prior to start time. A link to the live broadcast and the replay will be available on the Company’s website under the Investors tab.

About GDT 

Headquartered in Dallas, TX with approximately 700 employees, GDT is a global IT integrator and solutions provider approaching $1 Billion in annual revenue. GDT aligns itself with industry leaders, providing the design, build, delivery and management of IT solutions and services.

About QTS 

QTS Realty Trust, Inc. (NYSE: QTS) is a leading provider of data center solutions across a diverse footprint spanning more than 6 million square feet of owned mega scale data center space throughout North America. Through its software-defined technology platform, QTS is able to deliver secure, compliant infrastructure solutions, robust connectivity and premium customer service to leading hyperscale technology companies, enterprises, and government entities. Visit QTS online, call toll-free 877.QTS.DATA or follow on Twitter @DataCenters_QTS.