Solutions Blog

Flash, yes, but is it storage or memory?

By Richard Arneson

We’ve all been pretty well trained to believe that, at least in the IT industry, anything defined or labeled as “flash” is a good thing. It conjures up thoughts of speed (“in a flash”), which is certainly one of the most operative words in the industry―everybody wants “it” done faster. But flash memory and flash storage are often confused, as both store information and both get referred to as solid state storage. For instance, a thumb drive utilizes flash memory, but is considered a storage device, right? And both are solid state devices, which means neither is mechanical, but electronic. Mechanical means moving parts, and moving parts means prone to failure from drops, bumps, shakes or rattles.

Flash Memory―short-term storage

Before getting into flash memory, just a quick refresher on what memory accomplishes. Memory can be viewed as short-term data storage, maintaining information that a piece of hardware is actively using. The more applications you’re running, the more memory is needed. It’s like a workbench, of sorts, and the larger its surface area, the more projects you can be working on at one time. When you’re done with a project, you can store it long-term (data storage), where it’s easily retrieved when needed.

Flash memory accomplishes its tasks in a non-volatile manner, meaning it retains its data even when power is removed. It’s quickly accessible, compact, and more durable than a mechanical drive. Volatile memory, such as RAM (Random Access Memory), requires the device to be powered on to hold its contents―once the device is turned off, data in RAM is gone.

Flash Storage―storage for the long term

Much like a combustion engine needs fuel, flash storage (the engine) needs flash memory (the fuel) to run. It’s non-volatile (it retains data without power), and utilizes one of two types of flash memory―NAND or NOR.

NAND flash memory writes and reads data in blocks, while NOR does it in independent bytes. NOR flash is faster and more expensive, and better for processing small amounts of code―it’s often used in mobile phones. NAND flash is generally used for devices that need to upload and/or replace large files, such as photos, music or videos.
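The block-versus-byte distinction can be sketched with a toy model. The block size and the bookkeeping here are invented for illustration, not a model of real NAND geometry:

```python
# Toy illustration (not a hardware model): NAND-style block access vs
# NOR-style byte access. The block size is made up for the example.

NAND_BLOCK_SIZE = 4  # hypothetical block size in bytes

def nand_write(storage: bytearray, offset: int, data: bytes) -> int:
    """Write by whole blocks: even a 1-byte change touches its entire block.
    Returns the number of bytes physically rewritten."""
    start_block = offset // NAND_BLOCK_SIZE
    end_block = (offset + len(data) - 1) // NAND_BLOCK_SIZE
    storage[offset:offset + len(data)] = data
    return (end_block - start_block + 1) * NAND_BLOCK_SIZE

def nor_write(storage: bytearray, offset: int, data: bytes) -> int:
    """Write independent bytes: only the changed bytes are rewritten."""
    storage[offset:offset + len(data)] = data
    return len(data)

storage = bytearray(16)
print(nand_write(storage, 5, b"x"))  # touches a whole 4-byte block -> 4
print(nor_write(storage, 5, b"x"))   # touches a single byte -> 1
```

This is why NOR suits small, byte-level code accesses while NAND’s block orientation pays off for large sequential files.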

Confusion between flash storage and flash memory might be non-existent for some, maybe even most, but it’s astounding how much information either confuses the two or does a poor job differentiating them.

Contact the Flash experts

For more information about flash storage, including all-flash arrays, which contain many flash memory drives and are ideal for large enterprise and data center solutions, contact the talented, tenured solutions architects and engineers at GDT. They’re experienced at designing and implementing storage solutions, whether on-prem or in the cloud, for enterprises of all sizes. You can reach them at

When considering an MSP, don’t forget these letters: ITSM and ITIL

By Richard Arneson

It’s not hard to find a Managed Services Provider (MSP); the hard part is finding the right one. Of course, there are many, many things to consider when evaluating MSPs, including the quality of their NOC and SOC (don’t forget the all-important SOC), the experience of the professionals who manage and maintain them on a second-by-second basis, the length of time they’ve been providing managed services, the breadth and depth of their knowledge, and the range of customer sizes and industries they serve. But there’s something else that should be considered, and asked about, if you’re evaluating MSPs―whether they utilize ITSM and ITIL methodologies.

ITSM (Information Technology Service Management)

ITSM is an approach for the design, delivery, management and overall improvement of an organization’s IT services. Quality ITSM delivers the right people, technology, processes and toolsets to address business objectives. If you currently manage IT services for your organization, you have, whether you know it or not, an ITSM strategy. Chances are that if you don’t know you have one, it might not be very effective, which could be one of the reasons you’re evaluating MSPs.

Ensure the MSPs you’re evaluating staff their NOC and SOC with professionals who adhere to ITSM methodologies. If an ITSM strategy is poorly constructed and doesn’t align with your company’s goals, it will undermine the ability to achieve ITIL best practices.

ITIL (Information Technology Infrastructure Library)

ITIL is a best practices framework that helps align IT with business needs. It outlines complete guidelines for five key IT lifecycle service areas: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. ITIL’s current version is 3 (v3), so it’s important not only to ensure an MSP follows ITIL methodologies, but to make certain they’re well-versed in ITIL v3, which addresses twenty-eight (28) different business processes that affect a company’s ITSM.

Here’s the difference between ITSM and ITIL that you need to remember

ITSM is how IT services are managed. ITIL is a best practices framework for ITSM. So, put simply, ITSM is what you do, and ITIL is how to do it. ITIL helps make sense of ITSM processes. ITIL isn’t the only framework of its type in the IT industry, but it’s undoubtedly the most widely used.

Without understanding the relationship between ITSM and ITIL, companies are unlikely to gain business agility, operational transparency, and reductions in downtime and costs. And if your MSP doesn’t understand that relationship, they’re far less likely to deliver those benefits.

For more info, turn to Managed Services Experts

Selecting an MSP is a big decision. Turning over the management of your network and security can be a make-or-break decision. Ensuring that they closely follow ITSM and ITIL methodologies is critically important.

For more information about ITSM and ITIL, contact the Managed Services professionals at GDT. They manage networks and security for some of the largest companies and service providers in the world from their state-of-the-art, 24x7x365 NOC and SOC. You can reach them at

Infrastructure Modernization to Handle the most Demanding of Applications

By Richard Arneson

In June, HPE announced the creation of its New Compute Experience, which is powered by its Gen10 servers, some of the IT industry’s most secure equipment. HPE is the first equipment vendor to place silicon-based security into its servers, which addresses firmware attacks, one of the industry’s biggest threats.

Silicon-Based Security―unique, highly secure

Silicon-based security works at the firmware level, and each HPE Gen10 server creates a unique fingerprint for the silicon, so the server won’t boot up unless the firmware perfectly matches the print.
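The fingerprint idea can be illustrated with a short sketch. This is a simplified analogy using an ordinary hash function, not HPE’s actual silicon implementation:

```python
# Simplified sketch of the root-of-trust concept: boot proceeds only if
# the firmware's hash matches a fingerprint anchored in hardware.
# Illustration only -- not HPE's actual mechanism.

import hashlib

def fingerprint(firmware: bytes) -> str:
    """Stand-in for the unique fingerprint created for the silicon."""
    return hashlib.sha256(firmware).hexdigest()

def boot(firmware: bytes, anchored_fingerprint: str) -> str:
    """Refuse to boot unless the firmware perfectly matches the print."""
    if fingerprint(firmware) != anchored_fingerprint:
        return "halt: firmware does not match silicon fingerprint"
    return "booting"

good = b"firmware-image-v1"
anchor = fingerprint(good)          # conceptually anchored in the silicon
print(boot(good, anchor))           # booting
print(boot(good + b"!", anchor))    # tampered image is refused
```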

HPE also embedded proactive detection and recovery, which means the Gen10 servers scan millions of lines of code to hunt down any potential malware. And on top of that, they’ve applied advanced machine learning that identifies malicious behavior, which allows the Gen10 server to continually train itself. In short, the Gen10 servers’ ability to analyze patterns and suspicious activity gets better and better.

Intelligent Systems Tuning

HPE Intelligent System Tuning technology, which is exclusively available on the Gen10 servers, dynamically tunes the server’s performance based on unique workloads. Users enjoy improved throughput, reduced latency and cost savings.

Economic Control

Speaking of savings, HPE provides users the ability to pay only for the server resources used, and provides on-demand scaling without overprovisioning and generating unnecessary costs. Flexible payment models allow customers to better align costs with business outcomes, and scale based on future needs.

Persistent Memory

Persistent memory is solid-state memory that, unlike DRAM, retains data without power, and it can continue to be accessed even after the process that created it has ended. HPE’s NVDIMM (non-volatile dual in-line memory module) is installed on all Gen10 memory buses, which greatly reduces latency and maximizes compute power to address the most demanding of workloads.

HPE’s Fleet of Gen10 Servers

HPE ProLiant DL

The HPE ProLiant DL servers are rack-optimized and balance performance, manageability and capacity.

HPE ProLiant ML

The ProLiant ML is perfect for remote and branch offices, and growing businesses.

HPE Apollo Systems

Designed for massive scale-up and scale-down, HPE’s Apollo Systems provide high-density, rack-scale compute, storage and networking solutions to address big data, object storage and HPC (High Performance Computing) workloads.

HPE Synergy

HPE Synergy is truly the world’s first platform architected strictly for composable infrastructure, and is built with a highly adaptable hybrid IT engine.

HPE BladeSystem

Designed to drive traditional and hybrid IT workloads across converged infrastructures, HPE’s BladeSystem provides scalable business performance with secure service delivery.

HPE ConvergedSystem 500

Designed for room-to-grow flexibility, the ConvergedSystem 500 is purpose-built, optimized and pre-integrated for mission-critical reliability in SAP HANA scale-up configurations.

Talk to the experts

For more information about HPE’s Gen10 servers and how they can modernize your infrastructure, contact the talented, tenured solutions architects and engineers at GDT. They’re experienced at designing and implementing solutions for enterprises of all sizes. You can reach them at


The story of the first Composable Infrastructure

By Richard Arneson

In 2016, HPE introduced the first composable infrastructure solution to the marketplace. Actually, they didn’t just introduce the first solution, they created the market. HPE recognized, along with other vendors and customers, some of the limitations inherent in hyperconvergence, which provided enterprise data centers a cloud-like experience with on-premises infrastructures. But HPE was the first company to address these limitations, such as compute, storage and network being locked together: if one needed upgrading, the others had to be upgraded as well, even when it wasn’t needed. And hyperconvergence required multiple programming interfaces; with composable infrastructure, a unified API can transform the entire infrastructure with a single line of code.

HPE Synergy

HPE Synergy was the very first composable infrastructure platform built from the ground up, and it embodies HPE’s Idea Economy, a concept built on, in their words, the belief “that disruption is all around us, and the ability is needed to turn an idea into a new product or a new industry.”

HPE set out to address the elements that proved difficult, if not impossible, with traditional technology, such as the ability to:

  • Quickly deploy infrastructure through flexibility, scaling and updating
  • Run workloads anywhere, whether on physical or virtual servers…even in containers
  • Operate any workload without worrying about infrastructure resources or compatibility issues
  • Ensure the infrastructure can provide the right service levels to drive positive business outcomes


The foundation of HPE’s Composable Infrastructure is the HPE Synergy 12000 frame (10 rack units), which combines compute, storage, network and management into a single infrastructure. The frame’s front module bays easily accommodate and integrate a broad array of compute and storage modules. There are two bays for management, with the Synergy Composer loaded with HPE OneView software to compose storage, compute and network resources in customers’ configuration of choice. OneView templates are provided for provisioning of each of the three resources (compute, storage and network), and can monitor, flag, and remediate server issues based on the profiles associated with them.

Frames can be added as workloads increase, and a pair of Synergy Composer appliances can manage, with a single management domain, up to 21 frames.

A Unified API

The Unified API allows users, through the Synergy Composer user interface, to access all management functions. It operates at a high abstraction level and makes actions repeatable, which greatly saves time and reduces errors. And remember, a single line of code can address compute, storage and network, which greatly streamlines and accelerates provisioning, and allows DevOps teams to work and develop more rapidly.
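What a single call composing all three resource types might look like can be sketched against a hypothetical OneView-style REST API. Every endpoint, field name and value below is invented for illustration and is not HPE’s actual API:

```python
# Hypothetical sketch: one request body that composes compute, storage and
# network together. All names/URIs here are invented for illustration.

import json

def build_server_profile(name: str, template: str, network: str, volume: str) -> str:
    """Compose compute, network and storage settings into one request body."""
    profile = {
        "name": name,
        "serverProfileTemplateUri": template,      # compute, via a template
        "connections": [{"networkUri": network}],  # network
        "sanStorage": {"volumeUri": volume},       # storage
    }
    return json.dumps(profile)

body = build_server_profile("web-01", "/templates/web", "/networks/prod", "/volumes/web-01")
# A single POST of this body would then provision all three resource types, e.g.:
# requests.post("https://composer.example.com/rest/server-profiles", data=body)
print(json.loads(body)["name"])  # web-01
```

The point is the shape of the call: one declarative body, repeatable and version-controllable, rather than per-device manual steps.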


Compute

HPE Compute modules, which come in a wide variety based on the types of workloads required, create a pool of flexible capacity that can be configured to rapidly―practically instantaneously―provision the infrastructure for a broad range of applications. All compute modules deliver high levels of performance, scalability, and simplified storage and configurations.


Storage

Composable storage with HPE Synergy is agile and flexible, and offers many options that can address a variety of storage needs, such as SAS, SFF, NVMe SFF, Flash uFF, or diskless.

Network (aka Fabric)

HPE Synergy Composable Fabric simplifies network connectivity by using disaggregation to create a cost-effective, highly available and scalable architecture. It creates pools of flexible capacity that provisions rapidly to address a broad range of applications. It’s enabled by HPE Virtual Connect, and can match workload performance needs with its low latency, multi-speed architecture. This one device can converge traffic across multiple frames (creating a rack scale architecture) and directly connects to external LANs.

Talk to the experts

For more information about HPE Synergy and what it can provide to your organization, contact the talented, tenured solutions architects and engineers at GDT. They’re experienced at designing and implementing composable and hyperconverged solutions for enterprises of all sizes. You can reach them at


Composable Infrastructure and Hyperconvergence…what’s the difference?

By Richard Arneson

You can’t flip through a trade pub for more than twenty seconds without reading one of these two words, probably both: composable and hyperconvergence. Actually, there’s an extremely good chance you’ll see them together, considering both provide many of the same benefits to enterprise data centers. But with similarities comes confusion, leaving some to wonder when, or why, one should be used instead of the other. To add fuel to those flames of confusion, hyperconvergence and composable can be, and often are, used together, and even complement each other quite well. But, if nothing else, keep one primary thought in mind―composable is the evolutionary next step from hyperconvergence.

In the beginning…

Hyperconvergence revolutionized data centers by providing them a cloud-like experience with an on-premises infrastructure. Since its inception approximately six years ago (its precise age is up for debate), the hyperconvergence market has grown to just north of $3.5B. Hyperconvergence reduces a rack of servers down to a small, 2U appliance, combining server, software-defined storage, and virtualization. Storage is handled in software that manages storage nodes, which can be either physical or virtual servers. Each node runs virtualization software identical to the other nodes, allowing for a single, virtualized storage pool composed of the combined nodes. It’s all software-managed, and is especially handy in the event of equipment, or node, failure.

However, hyperconvergence, for all its benefits, has one primary drawback―storage and compute must be scaled together, even if one or the other doesn’t need to be scaled at that very moment. For instance, if you need to add storage, you also have to add more compute and RAM. With composable infrastructures, you can add the needed resources independently of one another. In short, hyperconvergence doesn’t address as many workloads as composable infrastructure.
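The coupled-versus-independent scaling difference can be reduced to a toy calculation. The resource units are made up purely to show the structural difference:

```python
# Toy comparison of coupled vs independent scaling. Unit counts are
# invented for illustration.

HCI_NODE = {"compute": 1, "storage": 1}  # hyperconverged: resources come bundled

def hci_add_storage(units_needed: int) -> dict:
    """Adding storage to a hyperconverged cluster adds whole nodes,
    so compute comes along whether it's needed or not."""
    return {"compute": units_needed * HCI_NODE["compute"],
            "storage": units_needed * HCI_NODE["storage"]}

def composable_add_storage(units_needed: int) -> dict:
    """Composable infrastructure lets storage scale on its own."""
    return {"compute": 0, "storage": units_needed}

print(hci_add_storage(3))         # {'compute': 3, 'storage': 3}
print(composable_add_storage(3))  # {'compute': 0, 'storage': 3}
```

Those three unneeded compute units are exactly the overprovisioning composable infrastructure avoids.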

…then there was composable

Who coined the term Composable Infrastructure is up for debate, but HPE was definitely the first to deliver it to the marketplace with its introduction of HPE Synergy in 2016. Today there are many vendors, in addition to HPE, offering composable solutions, most notably Cisco’s UCS and Dell EMC’s VxBlock. And each of these solutions satisfies the three basic goals of composable infrastructures:

  • Software-Defined intelligence
    • Creates compute, storage and network connectivity from pooled resources to deploy VMs, on-demand servers and containers.
  • Access to a fluid pool of resources
    • Resources can be sent to support needs as they arise. The pools are like additional military troops that are deployed where and when they’re needed.
  • Management through a single, unified API
    • A unified API means the deployment of infrastructure and applications is faster and far easier; code can be written once that addresses compute, storage and network. Provisioning is streamlined and designed with software intelligence in mind.

Talk to the experts

For more information about hyperconverged or composable infrastructures, contact the talented, tenured solutions architects and engineers at GDT. They’re experienced at designing and implementing hyperconverged and composable solutions for enterprises of all sizes. You can reach them at


Not in the Cloud, but in the…Fog?

By Richard Arneson

Just when everybody got comfortable bandying about the cloud, along comes another meteorology-related tech term―fog. Yes, we now have Fog Computing. While it’s in its infancy (in fact, the OpenFog Consortium was created only three short years ago), it will likely become another oft-used word in the networking vernacular.

The consortium was founded in 2015 by Cisco (which coined the term), ARM Holdings, Dell EMC, Intel, Microsoft, and Princeton University, and was a response to the number and precipitous growth of IoT devices. To accommodate those growing numbers (over 9 billion currently in use, estimated to be over 21 billion by 2020), they saw the need to extend cloud computing to the edge. And as the consortium sees it, moving to the edge is best described as moving to the fog.

Fog Computing sounds suspiciously like Edge Computing

Yes, fog and edge computing sound like they’re one and the same, but they are indeed different. They both manage, store and process data at the edge, but, according to Cisco’s Helder Antunes, who is an OpenFog Consortium member, “Edge computing is a component, or a subset of Fog Computing. Think of Fog Computing as the way data is processed from where it is created to where it will be stored. Edge computing refers just to data being processed close to where it is created. Fog Computing encapsulates not just that edge processing, but also the network connections needed to bring that data from the edge to its end point.”

The benefits of Fog Computing

With Fog Computing, organizations have more options for processing data, which is beneficial for applications that require data to be processed more quickly―for instance, an IoT device that needs to respond instantaneously, or as close to that as possible.

By creating low-latency connections between devices, Fog Computing can reduce the amount of bandwidth needed when compared to having it sent to the cloud for processing. It can even be used when there’s no bandwidth connection, which, of course, means it must be processed very, very close to where it was created. And if security is a concern, which it always is, Fog Computing can be protected by virtual firewalls.
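The placement decision fog computing enables can be sketched as a toy policy. The latency figures below are invented for the example:

```python
# Toy placement decision in the spirit of fog computing: process close to
# the device when latency (or lack of bandwidth) demands it, otherwise use
# the cloud. Latency figures are invented for illustration.

EDGE_LATENCY_MS = 5      # hypothetical round trip to a nearby fog node
CLOUD_LATENCY_MS = 120   # hypothetical round trip to a distant cloud region

def choose_processing_site(deadline_ms: float, link_up: bool) -> str:
    if not link_up:
        return "edge"    # no bandwidth connection: process where data is created
    if deadline_ms < CLOUD_LATENCY_MS:
        return "edge"    # cloud can't answer within the deadline
    return "cloud"

print(choose_processing_site(deadline_ms=10, link_up=True))    # edge
print(choose_processing_site(deadline_ms=500, link_up=True))   # cloud
print(choose_processing_site(deadline_ms=500, link_up=False))  # edge
```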

The OpenFog Consortium’s three goals for Fog Computing

The OpenFog Consortium’s mission is to create an open reference architecture for Fog Computing, build test beds and operational models, define and advance the technology, educate the marketplace, and promote business development. It developed and outlined three goals that Fog Computing needs to address and support:

  1. Horizontal scalability, which means it should serve the needs of multiple industries.
  2. The ability to operate across the continuum that exists between IoT devices and the cloud.
  3. Serve as a system-level technology that extends IoT devices over the network edge, through to the cloud, and across an array of network protocol layers.

Before you get too comfortable using the term Fog Computing, get ready for another one that’s slowly gaining steam―Mist Computing.

For more information about Cloud, Edge, or Fog―even Mist―Computing, contact one of the tenured networking professionals at GDT. They maintain the highest certification levels in the industry, and have helped companies of all sizes, and from all industries, realize their digital transformation goals. You can reach them at They’d love to hear from you.

Intent-Based Networking (IBN) is all the buzz

By Richard Arneson

You may or may not have heard of it, but if you fall into the latter camp, it won’t be long until you do―probably a lot. Network management has always been associated with several words, none of them very appealing to IT professionals: manual, time-consuming and tedious. An evolution is taking place to take those three elements out of network management―Intent-Based Networking, or IBN.

It’s software

Some suggest that intent-based networking isn’t a product, but a concept or philosophy. Opinions aside, its nomenclature is confusing because “intent-based networking” doesn’t include an integral word―software.

Intent-based networking removes manual, error-prone network management and replaces it with automated processes that are guided by network intelligence, machine learning and integrated security. According to several studies regarding network management, it’s estimated that anywhere from 75% to 83% of network changes are currently conducted via CLIs (command line interfaces). CLI-driven network changes, which are made manually, are prone to mistakes, the number of which depends on the user making the changes. And resultant network downtime from those errors means headaches, angry users and, worst of all, a loss of revenue. If revenue generation is directly dependent on the network being up, millions of dollars can be lost, even if the network is down for a short period of time.

How does IBN work?

In the case of intent-based networking, the word intent simply means what the network “intends” to accomplish. It enables users to configure how, exactly, they intend the network to behave by applying policies that, through the use of automation and machine learning, can be pushed out to the entire infrastructure.

Wait a minute, IBN sounds like SDN

If you’re thinking this, you’re not the only one. They sound very similar, what with the ease of network management, central policy setting, use of automation, cost savings and agility. And to take that a step further, IBN can use SDN controllers and even augment SDN deployments. The main difference, however, lies in the fact that IBN is concerned more with building and operating networks that satisfy intent, rather than SDN’s focus on virtualization (creating a single, virtual network by combining hardware and software resources and functionality).

IBN―focused on the what, not the how

IBN first understands what the network is intended to accomplish, then calculates exactly how to do it. With apologies to SDN, IBN is simply smarter and more sophisticated. If it sounds like IBN is the next evolution of SDN, you’re right. While the degree or level of evolution might be widely argued, it would take Clarence Darrow to make a good case against evolution altogether. (Yes, I’m aware of the irony in this statement.)

Artificial Intelligence (AI) and Machine Learning

Through advancements in AI and algorithms used in machine learning, IBN enables network administrators to define a desired state of the network (intent), then rely on the software to implement infrastructure changes, configurations and security policies that will satisfy that intent.

Elements of IBN

According to Gartner, there are four elements that define intent-based networking. And if they seem a lot like SDN, you’re right again. Basically, it’s only the first element that really distinguishes IBN from SDN.

  1. Translation and Validation―The end user inputs what is needed, and the network configures how it will be accomplished and validates whether the design and related configurations will work.
  2. Automated Implementation―Through network automation and/or orchestration, the appropriate configuration can be applied across the entire infrastructure.
  3. Awareness of Network State―The network is monitored in real-time, and is both protocol- and vendor-agnostic.
  4. Assurance and Dynamic Optimization/Remediation―The network is continuously validated in real-time, and corrective action can be taken, such as blocking traffic, modifying network capacity, or notifying network administrators that the intent isn’t being met.
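Together, these four elements form a closed loop, which can be sketched with toy data. None of this reflects a specific vendor’s implementation:

```python
# Minimal sketch of the IBN closed loop: declare intent, derive the desired
# configuration, detect drift against observed state, and remediate.
# All state here is toy data for illustration.

intent = {"vlan10": "up", "firewall": "enabled"}

def translate(intent: dict) -> dict:
    """Element 1: turn intent into a concrete desired configuration."""
    return dict(intent)

def detect_drift(desired: dict, observed: dict) -> dict:
    """Element 3: compare real-time network state against the intent."""
    return {k: v for k, v in desired.items() if observed.get(k) != v}

def remediate(observed: dict, drift: dict) -> dict:
    """Element 4: push corrective changes so the intent is met again."""
    observed.update(drift)
    return observed

observed = {"vlan10": "down", "firewall": "enabled"}
drift = detect_drift(translate(intent), observed)
print(drift)  # the corrective change to push: {'vlan10': 'up'}
observed = remediate(observed, drift)
print(detect_drift(translate(intent), observed))  # {} -- intent is now met
```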

IBN―Sure, it’s esoteric, but definitely not just a lot of hype

If you have questions about intent-based networking and what it can do for your organization, contact one of the networking professionals at GDT for more information. They’ve helped companies of all sizes, and from all industries, realize their digital transformation goals. You can reach them here: They’d love to hear from you.

Open and Software-Driven―it’s in Cisco’s DNA

By Richard Arneson

Cisco’s Digital Network Architecture (DNA), announced to the marketplace approximately two years ago, brings together all the elements of an organization’s digital transformation strategy: virtualization, analytics, automation, cloud and programmability. It’s an open, software-driven architecture that complements its data center-based Application-Centric Infrastructure (ACI) by extending that same policy-driven, software development approach throughout the entire network, including campuses and branches, be they wired or wireless. It’s delivered through the Cisco ONE™ Software family, which enables simplified software-based licensing and helps protect software investments.

What does all of that really mean?

With Cisco DNA, each network device is considered part of a unified fabric, which gives IT departments a simpler and more cost-effective means of taking control of their network infrastructure. Now IT departments can react at machine speed to quickly changing business needs, including security threats, across the entire network. Prior to Cisco DNA, reaction times relied on human-powered workflows, which ultimately meant making changes one device at a time. Now they can interact with the entire network through a single fabric and, in the case of a cyber threat, address it in real-time.

With Cisco DNA, companies can address the entire network as one, single programmable platform. Ultimately, employees and customers will enjoy a highly enhanced user experience.

The latest buzz―Intent-based Networking

Cisco DNA is one of the company’s answers to the industry’s latest buzz phrase―intent-based networking. In short, intent-based networking takes the network management of yore (manual, time-consuming and tedious) and automates those processes. It accomplishes this by applying deep intelligence and integrated security to deliver network-wide assurance.

Cisco DNA’s “Five Guiding Principles”:

  1. Virtualize everything. With Cisco DNA, companies can enjoy the freedom of choice to run any service, anywhere, independent of underlying platforms, be they virtual, physical, on-prem or in the cloud.
  2. Automate for easy deployment, maintenance and management―a real game-changer.
  3. Provide Cloud-delivered Service Management that combines the agility of the cloud with security and the control of on-prem solutions.
  4. Make it open, extensible and programmable at every layer, with open APIs (Application Programming Interfaces) and a developer platform to support an extensive ecosystem of network-enabled applications.
  5. Deliver extensive Analytics, which provide thorough insights on the network, the IT infrastructure and the business.

Nimble, simple and network-wide―that’s GDT and Cisco DNA

If you haven’t heard of either intent-based networking or Cisco’s DNA, contact one of the networking professionals at GDT for more information. They’ve helped companies of all sizes, and from all industries, realize their digital transformation goals. You can reach them here: They’d love to hear from you.

SD-WAN: Demystifying Overlay, Underlay, Encapsulation & Network Virtualization

More details on the subject follow, but let’s get this out of the way first: SD-WAN is a virtual, or overlay, network; the physical, or underlay, network is the one on which the overlay network resides. Virtual overlay networks contain nodes and links (virtual ones, of course) and allow new services to be enabled without re-configuring the entire network. They are secure and encrypted, and are independent of the underlay network, whether it’s MPLS, ATM, Wi-Fi, 4G, LTE, et al. SD-WAN is transport agnostic―no offense, but it simply doesn’t care about the means of transport you’ve selected.

While the oft-mentioned benefits of SD-WAN include cost savings, ease of management and the ability to prioritize traffic, they also provide many other less mentioned benefits, including:

  • The ability for developers to create and implement applications and protocols more easily in the cloud,
  • More flexibility for data routing through multi-path forwarding, and
  • The easy shifting of virtual machines (VMs) to different locations, but without the constraints of the physical, underlay network.

Overlay networks have been around for a while; in fact, the Internet is an overlay network that originally ran across the underlay Public Switched Telephone Network (PSTN). And in 2018, most overlay networks, such as VoIP and VPNs, run atop the Internet.


Encapsulation

According to Merriam-Webster, the word encapsulation means “to enclose in or as if in a capsule.” And that’s exactly what occurs in SD-WAN, except the enclosure isn’t a capsule, but a packet. The encapsulation occurs within the physical network, and once the primary packet reaches its destination, it’s opened to reveal the inner, or encapsulated, overlay network packet. If the receiver of the delivered information isn’t authenticated, they won’t be able to access it.
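A toy packet-in-packet sketch shows the idea. The framing below is invented for illustration; real overlays use protocols such as VXLAN or IPsec:

```python
# Toy packet-in-packet encapsulation: the overlay packet rides as the
# payload of an underlay packet and is revealed only when the outer
# packet is opened by an authenticated receiver. Framing is invented.

import json
from typing import Optional

def encapsulate(overlay_packet: dict, underlay_src: str, underlay_dst: str) -> dict:
    return {"src": underlay_src, "dst": underlay_dst,
            "payload": json.dumps(overlay_packet)}  # inner packet carried as payload

def decapsulate(underlay_packet: dict, authenticated: bool) -> Optional[dict]:
    if not authenticated:
        return None  # unauthenticated receivers can't access the inner packet
    return json.loads(underlay_packet["payload"])

inner = {"src": "10.0.0.1", "dst": "10.0.0.2", "data": "hello"}
outer = encapsulate(inner, "198.51.100.1", "203.0.113.9")
print(decapsulate(outer, authenticated=True))   # the original overlay packet
print(decapsulate(outer, authenticated=False))  # None
```

Note that the underlay only ever sees the outer addresses; the overlay addressing stays private inside the payload.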

Network Virtualization

SD-WAN (including SDN) and Network Virtualization are often used interchangeably, but the former is really a subset of the latter. They both, through the use of software, connect virtual machines (VMs) that mimic physical hardware. And both allow IT managers to consolidate multiple physical networks, divide them into segments, and ultimately enjoy easier network management, automation, and improved speed.

Don’t leave your network to chance

WANs and LANs are the lifeblood of IT departments. If you’re considering SD-WAN and would like to enjoy the benefits it can, if deployed optimally, deliver, calling on experienced SD-WAN solutions architects and engineers should be your first order of business. Even though SD-WAN is widely touted as a simple, plug-n-play networking solution, there are many things to consider in addition to those wonderful benefits you’ve been hearing about for years. For instance, the use of multiple software layers can require more overhead, and the process of encapsulation can place additional demands on computing. Yes, there’s a lot to consider.

SD-WAN experts like those at GDT can help lead you through this critically important element of your digital transformation journey. They’ve done just that for enterprises of all sizes, and from a wide range of industries. You can reach their experienced SD-WAN solutions architects and engineers at They’d love to hear from you.

Dispelling myths about SD-WAN

Many of the misrepresentations of truth (OK, myths) that get bandied about regarding SD-WAN come from MPLS providers or network engineers who are happy with their current architecture and/or dread the thought of change. There’s no question, MPLS has been a great transport technology over the past fifteen (15) years or so, and its removal of Data Link layer (OSI’s Layer 2) dependency to provide QoS (Quality of Service) across the WAN was a considerable step up from legacy solutions, such as frame relay and ATM. It’s still a great, and widely used, transport protocol, and can be effectively utilized with SD-WAN. So, let’s start with this first myth…

SD-WAN is a replacement for MPLS

No question, SD-WAN is perfect for replacing MPLS in certain instances, especially as it pertains to branch offices. MPLS isn’t cheap, and provisioning it at each location requires a level of on-site expertise. Now consider the associated costs and hassles when a company has hundreds of locations. However, given the stringent QoS demands of many organizations, MPLS is still used to satisfy them―and it can perfectly augment SD-WAN, as well. MPLS provides very high, reliable packet delivery, and many companies use it solely for traffic requiring QoS while pushing everything else across the SD-WAN.

SD-WAN and WAN Optimization are the same thing

WAN Optimization was designed to address traffic traversing legacy networks, like frame relay and ATM. It was a way to squeeze the most out of an existing network without having to expensively upgrade bandwidth at each site. Basically, the demand for bandwidth outgrew what it was economical to buy, and WAN Optimization, through caching and protocol optimization, allowed users to retrieve already-downloaded content from a local cache―a faster, more efficient use of bandwidth. And WAN Optimization can work in conjunction with SD-WAN: it reduces latency across (very) long-distance WAN links, satisfies certain QoS needs through data compression, and addresses TCP/IP protocol limitations.
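The caching idea can be illustrated with a toy sketch: content already transferred once is served locally, so the WAN link is only used for new data. The class and helper names here are hypothetical; real WAN Optimization appliances deduplicate at the byte-segment level, not whole files:

```python
import hashlib

class WanCache:
    """Toy content-addressed cache illustrating the WAN Optimization idea."""

    def __init__(self):
        self._store = {}
        self.wan_transfers = 0  # counts expensive trips across the WAN

    def fetch(self, key: str, fetch_remote):
        if key in self._store:
            return self._store[key]   # cache hit: served locally, no WAN trip
        data = fetch_remote()         # cache miss: cross the WAN
        self.wan_transfers += 1
        self._store[key] = data
        return data

cache = WanCache()
remote_file = b"quarterly report"
key = hashlib.sha256(remote_file).hexdigest()  # key by content hash

cache.fetch(key, lambda: remote_file)  # first request crosses the WAN
cache.fetch(key, lambda: remote_file)  # second request is a local cache hit
assert cache.wan_transfers == 1
```

Two requests, one WAN transfer―that ratio only improves as more users at a site request the same content.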

SD-WAN is nothing more than a cost savings play

No question, SD-WAN is less costly than MPLS, and utilizes inexpensive, highly commoditized Internet connections. But there is a long list of reasons to utilize SD-WAN that go above and beyond savings. It’s far easier to deploy than MPLS and can be centrally managed, which is ideal for setting policies and then pushing them out to all SD-WAN locations. SD-WAN works with the transport of your choosing, whether that’s MPLS, 4G, Wi-Fi, or others. And there’s no longer a requirement to lease lines from only one (1) service provider, so customers enjoy far greater flexibility and the ability to monitor circuits regardless of the provider used.

SD-WAN requires a hybrid solution

Hybrid WANs, which utilize two (2) or more transport technologies across the WAN, are certainly not an SD-WAN requirement, but they work beautifully within that architecture. For instance, it’s not uncommon for organizations to utilize legacy networks for time-sensitive traffic, and SD-WAN for offloading certain applications to their corporate data center. A hybrid solution allows for the seamless flow of traffic between locations so that, in the event one link experiences loss or latency, the other can instantly take over and meet the associated SLAs.
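The failover logic described above can be sketched simply: measure each link against SLA thresholds, and shift traffic when the preferred link degrades. The threshold values and link measurements here are illustrative assumptions, not vendor defaults:

```python
# Illustrative SLA thresholds (real deployments tune these per application).
LOSS_THRESHOLD = 0.01       # 1% packet loss
LATENCY_THRESHOLD_MS = 150  # milliseconds

def pick_link(links: list) -> str:
    """Return the first link meeting the SLA, else fall back to the least-lossy one."""
    for link in links:
        if link["loss"] <= LOSS_THRESHOLD and link["latency_ms"] <= LATENCY_THRESHOLD_MS:
            return link["name"]
    return min(links, key=lambda l: l["loss"])["name"]

links = [
    {"name": "mpls",      "loss": 0.050, "latency_ms": 40},  # degraded: 5% loss
    {"name": "broadband", "loss": 0.001, "latency_ms": 60},  # healthy
]
assert pick_link(links) == "broadband"
```

When the MPLS link recovers (loss back under threshold), the same check would return traffic to it on the next evaluation.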

Here’s one that’s NOT a myth: if you’d like to implement SD-WAN, you should turn to professionals who specialize in it

To enjoy all that SD-WAN offers, there’s a host of things to consider, from architectures and applications to bandwidth requirements and traffic prioritization. SD-WAN is often referred to as a simple plug-n-play solution, but there’s more to it than meets the eye. Yes, it can be a brilliant WAN option, but failing to rely on SD-WAN experts may soon leave you thinking, “All that SD-WAN hype is just that…hype!”

Working with SD-WAN experts like those at GDT can help bring the technology’s many benefits to your organization and leave you thinking, “It’s no hype…SD-WAN is awesome.” They’ve done just that for many enterprises―large, medium and small. You can reach their experienced SD-WAN solutions architects and engineers at They’d love to hear from you.