Solutions Blog

Composable Infrastructure and Hyperconvergence…what’s the difference?

By Richard Arneson

You can’t flip through a trade pub for more than twenty seconds without reading one of these two words, probably both: composable and hyperconvergence. In fact, there’s an extremely good chance you’ll see them together, considering both provide many of the same benefits to enterprise data centers. But with similarity comes confusion, leaving some to wonder when, or why, one should be used instead of the other. To add fuel to those flames of confusion, hyperconvergence and composable infrastructure can be, and often are, used together, and even complement each other quite well. But if nothing else, keep one primary thought in mind―composable is the evolutionary next step from hyperconvergence.

In the beginning…

Hyperconvergence revolutionized data centers by providing a cloud-like experience with an on-premises infrastructure. Since its inception roughly six years ago (its precise age is up for debate), the hyperconvergence market has grown to just north of $3.5B. Hyperconvergence reduces a rack of servers down to a small, 2U appliance, combining server, software-defined storage, and virtualization. Storage is handled in software, which manages storage nodes that can be either physical or virtual servers. Each node runs virtualization software identical to the other nodes, allowing the combined nodes to present a single, virtualized storage pool. It’s all software-managed, which is especially handy in the event of an equipment, or node, failure.

However, hyperconvergence, for all its benefits, has one primary drawback―storage and compute must be scaled together, even if only one of them needs scaling at that moment. For instance, if you need to add storage, you also have to add more compute and RAM. With composable infrastructure, you can add the needed resources independently of one another. In short, hyperconvergence doesn’t address as many workloads as composable infrastructure.

…then there was composable

Who coined the term composable infrastructure is up for debate, but HPE was definitely the first to deliver it to the marketplace with its introduction of HPE Synergy in 2016. Today many vendors in addition to HPE offer composable solutions, most notably Cisco’s UCS and Dell EMC’s VxBlock. And each of these solutions satisfies the three basic goals of composable infrastructure:

  • Software-defined intelligence
    • Creates compute, storage and network connectivity from pooled resources to deploy VMs, on-demand servers and containers.
  • Access to a fluid pool of resources
    • Resources can be deployed to support needs as they arise―the pools are like additional military troops, sent where and when they’re needed.
  • Management through a single, unified API
    • A unified API makes deploying infrastructure and applications faster and far easier; code can be written once to address compute, storage and network. Provisioning is streamlined and designed with software intelligence in mind.
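
To make that “write once” idea concrete, here’s a minimal Python sketch of what a single provisioning payload might look like when one unified API covers compute, storage and network together. The endpoint, field names and values are purely illustrative, not any vendor’s actual schema.

```python
# Hypothetical example: one request body describes compute, storage and
# network together, so a single API call (rather than three separate
# tools) can provision a node. All field names are invented.

def compose_request(name, cpus, ram_gb, storage_gb, vlan):
    """Build a single provisioning payload for a composable node."""
    return {
        "name": name,
        "compute": {"cpus": cpus, "ram_gb": ram_gb},
        "storage": {"capacity_gb": storage_gb, "tier": "ssd"},
        "network": {"vlan": vlan, "nics": 2},
    }

payload = compose_request("web-01", cpus=8, ram_gb=64, storage_gb=500, vlan=120)
# In practice this would be POSTed once to the infrastructure's unified
# API, e.g.: requests.post("https://composer.example.com/api/nodes", json=payload)
```

Note that storage and compute are independent keys in the payload, which is exactly the point: either can be scaled without touching the other.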

Talk to the experts

For more information about hyperconverged or composable infrastructures, contact the talented, tenured solutions architects and engineers at GDT. They’re experienced at designing and implementing hyperconverged and composable solutions for enterprises of all sizes. You can reach them at


Not in the Cloud, but in the…Fog?

By Richard Arneson

Just when everybody got comfortable bandying about the cloud, along comes another meteorology-related tech term―fog. Yes, we now have Fog Computing. While it’s still in its infancy (in fact, the OpenFog Consortium was created only three short years ago), it will likely become another oft-used word in the networking vernacular.

The consortium was founded in 2015 by Cisco (which coined the term), ARM Holdings, Dell EMC, Intel, Microsoft, and Princeton University as a response to the precipitous growth of IoT devices. To accommodate those growing numbers (over 9 billion devices currently in use, estimated to exceed 21 billion by 2020), the founders saw the need to extend cloud computing to the edge. And as the consortium sees it, moving to the edge is best described as moving to the fog.

Fog Computing sounds suspiciously like Edge Computing

Yes, fog and edge computing sound like they’re one and the same, but they are indeed different. They both manage, store and process data at the edge, but, according to Cisco’s Helder Antunes, who is an OpenFog Consortium member, “Edge computing is a component, or a subset of Fog Computing. Think of Fog Computing as the way data is processed from where it is created to where it will be stored. Edge computing refers just to data being processed close to where it is created. Fog Computing encapsulates not just that edge processing, but also the network connections needed to bring that data from the edge to its end point.”

The benefits of Fog Computing

With Fog Computing, organizations have more options for processing data, which is beneficial for applications that require data to be processed more quickly―for instance, an IoT device that needs to respond instantaneously, or as close to that as possible.

By creating low-latency connections between devices, Fog Computing can reduce the amount of bandwidth needed compared to sending data to the cloud for processing. It can even be used when there’s no network connection at all, which, of course, means the data must be processed very, very close to where it was created. And if security is a concern, which it always is, Fog Computing can be protected by virtual firewalls.
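
As an illustrative sketch of that bandwidth saving, imagine a fog node that aggregates raw sensor readings locally and forwards only a small summary (and any alerts) upstream. The function and thresholds below are invented for illustration:

```python
# Toy fog-node logic: summarize raw readings at the edge so only a small
# object, rather than every sample, is sent to the cloud.

def summarize(readings, threshold):
    """Keep only the statistics and alerts the cloud actually needs."""
    alerts = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "avg": sum(readings) / len(readings),
        "alerts": alerts,          # only out-of-range values go upstream
    }

raw = [70.1, 70.3, 98.6, 70.2]      # e.g., temperature samples at the edge
upstream = summarize(raw, threshold=90.0)
# four raw readings reduced to one small summary object for the cloud
```

The same pattern scales: thousands of samples per minute can collapse into one upstream message, which is why fog processing suits low-bandwidth (or no-bandwidth) sites.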

The OpenFog Consortium’s three goals for Fog Computing

The OpenFog Consortium’s mission is to create an open reference architecture for Fog Computing, build test beds and operational models, define and advance the technology, educate the marketplace, and promote business development. It developed and outlined three goals that Fog Computing needs to address and support:

  1. Horizontal scalability, which means it should serve the needs of multiple industries.
  2. The ability to operate across the continuum that exists between IoT devices and the cloud.
  3. Serve as a system-level technology that extends IoT devices over the network edge, through to the cloud, and across an array of network protocol layers.

Before you get too comfortable using the term Fog Computing, get ready for another one that’s slowly gaining steam―Mist Computing.

For more information about Cloud, Edge, or Fog―even Mist―Computing, contact one of the tenured networking professionals at GDT. They maintain the highest certification levels in the industry, and have helped companies of all sizes, and from all industries, realize their digital transformation goals. You can reach them at They’d love to hear from you.

Intent-Based Networking (IBN) is all the buzz

By Richard Arneson

You may or may not have heard of it, but if you haven’t, it won’t be long until you do―probably a lot. Network management has always been associated with several words, none of them very appealing to IT professionals: manual, time-consuming and tedious. An evolution is taking place to take those three elements out of network management―Intent-Based Networking, or IBN.

It’s software

Some suggest that intent-based networking isn’t a product, but a concept or philosophy. Opinions aside, its nomenclature is confusing because “intent-based networking” doesn’t include an integral word―software.

Intent-based networking replaces manual, error-prone network management with automated processes guided by network intelligence, machine learning and integrated security. Several studies of network management estimate that anywhere from 75% to 83% of network changes are currently made via CLIs (command line interfaces). Those CLI-driven changes are made manually and are therefore prone to mistakes, the number of which depends on the user making them. The resulting network downtime means headaches, angry users and, worst of all, lost revenue. And if revenue generation depends directly on the network being up, millions of dollars can be lost even if the network is down for only a short period of time.

How does IBN work?

In the case of intent-based networking, the word intent simply means what the network “intends” to accomplish. It enables users to configure how, exactly, they intend the network to behave by applying policies that, through the use of automation and machine learning, can be pushed out to the entire infrastructure.

Wait a minute, IBN sounds like SDN

If you’re thinking this, you’re not the only one. They sound very similar, what with the ease of network management, central policy setting, use of automation, cost savings and agility. And to take that a step further, IBN can use SDN controllers and even augment SDN deployments. The main difference, however, lies in the fact that IBN is concerned more with building and operating networks that satisfy intent, rather than SDN’s focus on virtualization (creating a single, virtual network by combining hardware and software resources and functionality).

IBN―interested in what is needed

IBN first understands what the network is intended to accomplish, then calculates exactly how to do it. With apologies to SDN, IBN is simply smarter and more sophisticated. If it sounds like IBN is the next evolution of SDN, you’re right. While the degree or level of evolution might be widely argued, it would take Clarence Darrow to make a good case against evolution altogether. (Yes, I’m aware of the irony in this statement.)

Artificial Intelligence (AI) and Machine Learning

Through advancements in AI and algorithms used in machine learning, IBN enables network administrators to define a desired state of the network (intent), then rely on the software to implement infrastructure changes, configurations and security policies that will satisfy that intent.

Elements of IBN

According to Gartner, there are four elements that define intent-based networking. And if they seem a lot like SDN, you’re right again―it’s really only the first element that distinguishes IBN from SDN.

  1. Translation and Validation―The end user inputs what is needed; the network configures how it will be accomplished and validates whether the design and related configurations will work.
  2. Automated Implementation―Through network automation and/or orchestration, the appropriate configuration can be applied across the entire infrastructure.
  3. Awareness of Network State―The network is monitored in real time, and monitoring is both protocol- and vendor-agnostic.
  4. Assurance and Dynamic Optimization/Remediation―The network is continuously validated in real time, and corrective action can be taken, such as blocking traffic, modifying network capacity, or notifying administrators that the intent isn’t being met.
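
A toy Python sketch can make the first element, translation and validation, less abstract: a declared intent is rendered into per-device configurations, then checked before any automated rollout. Every device name, VLAN number and policy field here is invented for illustration only:

```python
# Illustrative only: translate one declared intent into per-device
# configs, then validate the result before automated implementation.

INTENT = {"segment": "payments", "isolation": True, "min_bandwidth_mbps": 100}

def translate(intent, devices):
    """Render a single intent into a list of per-device configs."""
    configs = []
    for dev in devices:
        configs.append({
            "device": dev,
            "vlan": 210 if intent["isolation"] else 1,   # invented VLAN id
            "qos_mbps": intent["min_bandwidth_mbps"],
        })
    return configs

def validate(configs, intent):
    """Check that the rendered configs actually satisfy the intent."""
    return all(c["qos_mbps"] >= intent["min_bandwidth_mbps"] for c in configs)

configs = translate(INTENT, ["edge-1", "edge-2", "core-1"])
assert validate(configs, INTENT)   # validation precedes automated rollout
```

The operator never states *how* to configure each box; the “how” is computed from the “what,” which is the distinction the article draws between IBN and plain SDN.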

IBN―Sure, it’s esoteric, but definitely not just a lot of hype

If you have questions about intent-based networking and what it can do for your organization, contact one of the networking professionals at GDT for more information. They’ve helped companies of all sizes, and from all industries, realize their digital transformation goals. You can reach them here: They’d love to hear from you.

Open and Software-Driven―it’s in Cisco’s DNA

By Richard Arneson

Cisco’s Digital Network Architecture (DNA), announced to the marketplace approximately two years ago, brings together all the elements of an organization’s digital transformation strategy: virtualization, analytics, automation, cloud and programmability. It’s an open, software-driven architecture that complements its data center-based Application-Centric Infrastructure (ACI) by extending that same policy-driven, software development approach throughout the entire network, including campuses and branches, be they wired or wireless. It’s delivered through the Cisco ONE™ Software family, which enables simplified software-based licensing and helps protect software investments.

What does all of that really mean?

With Cisco DNA, each network device is considered part of a unified fabric, which gives IT departments a simpler and more cost-effective means of taking control of their network infrastructure. IT departments can now react at machine speed to quickly changing business needs, including security threats, across the entire network. Prior to Cisco DNA, reaction times relied on human-powered workflows, which ultimately meant making changes one device at a time. Now administrators can interact with the entire network through a single fabric and, in the case of a cyber threat, address it in real time.

With Cisco DNA, companies can address the entire network as one, single programmable platform. Ultimately, employees and customers will enjoy a highly enhanced user experience.

The latest buzz―Intent-based Networking

Cisco DNA is one of the company’s answers to the industry’s latest buzz phrase―intent-based networking. In short, intent-based networking takes the network management of yore (manual, time-consuming and tedious) and automates those processes, using deep intelligence and integrated security to deliver network-wide assurance.

Cisco DNA’s five guiding principles:

  1. Virtualize everything. With Cisco DNA, companies enjoy the freedom to run any service anywhere, independent of underlying platforms, be they virtual, physical, on-prem or in the cloud.
  2. Automate for easy deployment, maintenance and management―a real game-changer.
  3. Provide Cloud-delivered Service Management that combines the agility of the cloud with security and the control of on-prem solutions.
  4. Make it open, extensible and programmable at every layer, with open APIs (Application Programming Interfaces) and a developer platform to support an extensive ecosystem of network-enabled applications.
  5. Deliver extensive Analytics, which provide thorough insights on the network, the IT infrastructure and the business.
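
The “open, extensible and programmable at every layer” principle shows up in practice as Cisco DNA Center’s northbound REST API. As a rough outline: a script first requests a token, then calls intent endpoints with an X-Auth-Token header. The paths below reflect DNA Center’s published API, but the host and credentials are placeholders, and no request is actually sent here:

```python
# Outline of talking to Cisco DNA Center's northbound ("Intent") REST
# API. Host and credentials are placeholders for illustration.
import base64

HOST = "https://dnac.example.com"   # placeholder controller address

def auth_header(user, password):
    """Basic-auth header used when requesting an X-Auth-Token."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {creds}"}

token_url = f"{HOST}/dna/system/api/v1/auth/token"          # POST -> token
inventory_url = f"{HOST}/dna/intent/api/v1/network-device"  # GET with X-Auth-Token

# e.g., with the requests library:
#   r = requests.post(token_url, headers=auth_header("admin", "secret"))
#   devices = requests.get(inventory_url,
#                          headers={"X-Auth-Token": r.json()["Token"]})
```

One token, one API, the whole fabric: the same pattern applies whether you’re pulling inventory or pushing policy.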

Nimble, simple and network-wide―that’s GDT and Cisco DNA

If you haven’t heard of either intent-based networking or Cisco’s DNA, contact one of the networking professionals at GDT for more information. They’ve helped companies of all sizes, and from all industries, realize their digital transformation goals. You can reach them here: They’d love to hear from you.

SD-WAN: Demystifying Overlay, Underlay, Encapsulation & Network Virtualization

More details on the subject follow, but let’s just get this out of the way first: SD-WAN is a virtual, or overlay, network; the physical, or underlay, network is the one on which the overlay network resides. Virtual overlay networks contain nodes and links (virtual ones, of course) and allow new services to be enabled without re-configuring the entire network. They are secure and encrypted, and are independent of the underlay network, whether that’s MPLS, ATM, Wi-Fi, 4G, LTE, et al. SD-WAN is transport-agnostic―no offense, but it simply doesn’t care about the means of transport you’ve selected.

While the oft-mentioned benefits of SD-WAN include cost savings, ease of management and the ability to prioritize traffic, it also provides many other, less-mentioned benefits, including:

  • The ability for developers to create and implement applications and protocols more easily in the cloud,
  • More flexibility for data routing through multi-path forwarding, and
  • The easy shifting of virtual machines (VMs) to different locations, but without the constraints of the physical, underlay network.

Overlay networks have been around for a while; the Internet itself is an overlay network that originally ran across the underlay Public Switched Telephone Network (PSTN). In fact, in 2018 most overlay networks, such as VoIP and VPNs, run atop the Internet.


Encapsulation

According to Merriam-Webster, encapsulation means “to enclose in or as if in a capsule.” And that’s exactly what occurs in SD-WAN, except the enclosure isn’t a capsule but a packet. The encapsulation occurs within the physical network, and once the outer packet reaches its destination, it’s opened to reveal the inner, or encapsulated, overlay network packet. If the receiver of the delivered information isn’t authenticated, they can’t access it.
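
To picture packet-in-packet encapsulation, here’s a toy Python sketch: the overlay packet rides inside the underlay packet’s payload, and only an authenticated receiver can open the inner one. The field names and the simple shared-key check are invented for illustration and bear no resemblance to a real SD-WAN wire format:

```python
# Toy packet-in-packet model. Field names and the "key" check are
# illustrative only, not a real tunneling protocol.

def encapsulate(inner_packet, underlay_src, underlay_dst, key):
    """Wrap an overlay packet inside an outer (underlay) packet."""
    return {"src": underlay_src, "dst": underlay_dst,
            "key": key, "payload": inner_packet}

def decapsulate(outer_packet, key):
    """Reveal the inner packet only if the receiver holds the right key."""
    if outer_packet["key"] != key:
        raise PermissionError("receiver not authenticated")
    return outer_packet["payload"]

overlay = {"src": "10.0.0.5", "dst": "10.0.9.9", "data": "app traffic"}
outer = encapsulate(overlay, "203.0.113.1", "198.51.100.7", key="shared-secret")
assert decapsulate(outer, "shared-secret")["data"] == "app traffic"
```

The underlay only ever sees the outer addresses; the overlay addresses and data stay sealed inside until the far-end SD-WAN node opens the capsule.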

Network Virtualization

SD-WAN (and SDN generally) and network virtualization are often used interchangeably, but the former is really a subset of the latter. Both use software to connect virtual machines (VMs) that mimic physical hardware, and both allow IT managers to consolidate multiple physical networks, divide them into segments, and ultimately enjoy easier network management, automation, and improved speed.

Don’t leave your network to chance

WANs and LANs are the lifeblood of IT departments. If you’re considering SD-WAN and would like to enjoy the benefits it can deliver when deployed optimally, calling on experienced SD-WAN solutions architects and engineers should be your first order of business. Even though SD-WAN is widely touted as a simple, plug-and-play networking solution, there’s more to consider than those wonderful benefits you’ve been hearing about for years. For instance, the use of multiple software layers can add overhead, and the process of encapsulation places additional demands on compute. Yes, there’s a lot to consider.

SD-WAN experts like those at GDT can help lead you through this critically important element of your digital transformation journey. They’ve done just that for enterprises of all sizes, and from a wide range of industries. You can reach their experienced SD-WAN solutions architects and engineers at They’d love to hear from you.

Dispelling myths about SD-WAN

Many of the misrepresentations of truth (OK, myths) that get bandied about regarding SD-WAN come from MPLS providers or network engineers who are happy with their current architecture and/or dread the thought of change. There’s no question MPLS has been a great transport technology over the past fifteen years or so, and its removal of Data Link Layer (OSI Layer 2) dependency to provide QoS (Quality of Service) across the WAN was a considerable step up from legacy solutions such as frame relay and ATM. It’s still a great, and widely used, transport option, and it can be effectively utilized with SD-WAN. So, let’s start with this first myth…

SD-WAN is a replacement for MPLS

No question, SD-WAN is perfect for replacing MPLS in certain instances, especially branch offices. MPLS isn’t cheap, and provisioning it at each location requires a level of on-site expertise. Now consider the associated costs and hassles when a company has hundreds of locations. However, given many organizations’ stringent QoS demands, MPLS is still used to satisfy them, and it can perfectly augment SD-WAN as well. MPLS provides very high, reliable packet delivery, and many companies use it solely for traffic requiring QoS while pushing everything else across the SD-WAN.

SD-WAN and WAN Optimization are the same thing

WAN Optimization was designed to address traffic traversing legacy networks like frame relay and ATM. It was a way to squeeze the most out of an existing network without expensively upgrading bandwidth at each site. Basically, the cost of bandwidth outgrew the need for more of it, and WAN Optimization, through caching and protocol optimization, let users download cached information from a file that had already been downloaded―faster, more efficient use of bandwidth. And WAN Optimization can work in conjunction with SD-WAN: it reduces latency across (very) long-distance WAN links, satisfies certain QoS needs through data compression, and addresses TCP/IP protocol limitations.

SD-WAN is nothing more than a cost savings play

No question, SD-WAN is less costly than MPLS and utilizes inexpensive, highly commoditized Internet connections. But there’s a long list of reasons to utilize SD-WAN that go beyond savings. It’s far easier to deploy than MPLS and can be centrally managed, which is ideal for setting policies and then pushing them out to all SD-WAN locations. SD-WAN works with the transport of your choosing, whether that’s MPLS, 4G, Wi-Fi, or others. And there’s no longer a requirement to lease lines from a single service provider, so customers enjoy far greater flexibility and the ability to monitor circuits regardless of the provider used.

SD-WAN requires a hybrid solution

Hybrid WANs, which utilize two or more transport technologies across the WAN, are certainly not an SD-WAN requirement, but they work beautifully within that architecture. For instance, it’s not uncommon for organizations to utilize legacy networks for time-sensitive traffic and SD-WAN for offloading certain applications to their corporate data center. A hybrid solution allows traffic to flow seamlessly between locations so that, in the event one link experiences loss or latency, the other can instantly take over and meet the associated SLAs.

Here’s one that’s NOT a myth: if you’d like to implement SD-WAN, you should turn to professionals who specialize in it

To enjoy all that SD-WAN offers, there’s a host of things to consider, from architectures and applications to bandwidth requirements and traffic prioritization. SD-WAN is often referred to as a simple plug-and-play solution, but there’s more to it than meets the eye. Yes, it can be a brilliant WAN option, but failing to rely on SD-WAN experts may soon leave you thinking, All that SD-WAN hype is just that…hype!

Working with SD-WAN experts like those at GDT can help bring the technology’s many benefits to your organization and leave you thinking, “It’s no hype…SD-WAN is awesome.” They’ve done just that for many enterprises―large, medium and small. You can reach their experienced SD-WAN solutions architects and engineers at They’d love to hear from you.

Flexible deployment to match unique architectural needs

In late 2017, tech giant VMware purchased VeloCloud, further strengthening its market-leading position in transitioning enterprises to a more software-defined future. The acquisition built on the success of its leading VMware NSX virtualization platform and expanded its portfolio to address branch transformation, security, end-to-end automation and application continuity from the data center to the cloud edge.

Referred to as NSX SD-WAN, VeloCloud’s solution allows for flexible deployment and secure connectivity that easily scales to meet the demands of enterprises of all sizes―and they know about “all sizes.” VMware provides compute, mobility, cloud networking and security offerings to over 500,000 customers throughout the world.

NSX SD-WAN satisfies the following key WAN needs:


Simplified Deployment and Management

From a central location, through a single pane of glass, enterprises of all sizes can build out branches in―literally―a matter of minutes, and set policies that are automatically pushed out to branch SD-WAN routers. Save the cost of sending a CCIE to the branch office in Timbuktu or Bugtussle, and use the savings on other initiatives.


Security

With cloud applications, BYOD, and the need to utilize the cellular or broadband transport of users’ choosing, security is, as well it should be, of the utmost importance. The robust NSX SD-WAN architecture secures data and traffic through a secure overlay regardless of the type of transport or the service provider. Best of all, it returns the ability to manage security, control and compliance to a central location.

Bandwidth Demands

With the growing―and growing―use of cloud applications, the need to utilize less expensive bandwidth is critically important. NSX SD-WAN can aggregate circuits to offer more bandwidth and deliver optimal cloud application performance.

Cloud Applications

If your employees aren’t currently spending an inordinate amount of time in the cloud, they will be. NSX SD-WAN provides direct access to the cloud, bypassing MPLS networks’ need to first backhaul traffic to a data center and only then to the cloud. Backhauling brings latency and a less than satisfying cloud experience.

NSX SD-WAN―Architecture friendly

When you’ve got over a half million customers around the world, it’s imperative to provide a solution that takes into account the many architectures that have been deployed. Regardless of the type of SD-WAN required―whether Internet-only or a Hybrid solution utilizing an existing MPLS network―NSX SD-WAN can satisfy the need.

GDT’s team of expert SD-WAN solutions architects and engineers have implemented SD-WANs for some of the largest enterprises and service providers in the world. For more information about what SD-WAN can provide for your organization, contact them at They’d love to hear from you.


How Companies are Benefiting from IT Staff Augmentation

By Richard Arneson

Companies have been augmenting their IT departments for years with professionals who can step in and make an immediate impact with their skill sets and hands-on expertise. And it’s not limited to engineers or solutions architects: project managers, high-level consultants, security analysts, DevOps professionals, cabling experts…the list is limited only by what falls within the purview of IT departments. It’s the perfect solution when a project or initiative has a finite timeline and requires a very particular level of expertise. And it offers a host of other benefits as well, by providing:

Greater Flexibility

Change and evolving business needs go hand-in-hand with information technology. Now more than ever, IT departments are tasked with creating more agile, cutting-edge business solutions, and their ability to adapt quickly can easily be a make-or-break proposition. You might not have the time or money to quickly find the individuals who can help expedite your company’s competitive advantage in the marketplace.

Cost Effectiveness

Bringing an IT professional onboard full-time to focus on a particular project can be cost-prohibitive if you’re left wondering how they’ll be utilized once the project is completed. And, of course, there are the costs of benefits to consider as well; according to the U.S. Department of Labor, benefits are worth about 30% of compensation packages.

Reduced Risk and More Control

Augmenting IT staff, rather than outsourcing an entire project, can not only help ensure the right skill sets are utilized, but also mitigate risk by keeping oversight and control in-house.

Quicker, easier access to the right IT pros

Thankfully, unemployment is lower than it’s been in years, and in the IT industry it’s less than half the national average. So quickly finding the right person with the perfect skill set can seem harder than finding a needle in a haystack. Companies’ recruiting efforts don’t focus exclusively on IT; they’re also filling jobs in finance, marketing, HR, manufacturing, et al. Turning to IT staff augmentation experts who maintain large networks of professionals can uncover the right personnel quickly.

An answer to Attrition

Remember that low jobless rate in the IT sector? Sure, it’s great news, but it also means there’s a lot of competition for the right resources. There will be attrition―it’s a given. And utilizing staff augmentation can help combat that by placing individuals on specific projects and initiatives for a designated period of time.

Call on the Experts

If you have questions about augmenting your IT staff with the best and brightest the industry has to offer, contact the GDT Staffing Services professionals at Some of the largest, most noteworthy companies in the world have turned to GDT so key initiatives can be matched with IT professionals who can help drive those projects to completion. They possess years of IT experience and expertise, and maintain a vast network of IT professionals who maintain the highest levels of certification in the industry. They’d love to hear from you.

The Plane Truth about SD-WAN

You can’t get more than a few words into any article, blog or brochure about SD-WAN without reading how the control and data planes are separated. For many, this might fall under the As long as it works, I don’t really care heading. And that’s evident in much of the writing on the subject―the separation is mentioned, but that’s about as far as the explanation goes. Yet the uncoupling of the control and data planes in SD-WAN is a fairly straightforward, easy-to-understand concept.

Control Plane comes first…

Often regarded as the brains of the network, the control plane governs the forwarding of information within the network. It handles routing protocols, load balancing, firewall configurations, et al., and determines the route data will take across the network.

…then Data Plane

The data plane forwards the traffic based on information it receives from the control plane. Think UPS. The control plane is dispatch telling the truck(s) where to go and exactly how to get there; the truck delivering the item(s) is the data plane.

So why is separating the control plane and data plane in SD-WAN a good thing?

In traditional WAN hardware, such as routers and switches, both the control plane and the data plane are embedded in the equipment’s firmware. Setting up a new location, or making changes to one, requires that the hardware be accessed and manually configured (see Cumbersome, Slow, Complicated). With SD-WAN, the decoupled control plane is embedded in software, so network management is far simpler and can be overseen and handled from a central location.
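
To keep the UPS analogy, here’s a toy Python sketch (not any vendor’s actual implementation) of the split: a central control plane computes the forwarding table once, and each data-plane node simply looks it up and forwards. Topology and link names are invented for illustration:

```python
# Illustrative control/data plane split. Names and the trivial routing
# logic are invented; real controllers weigh latency, loss, policy, etc.

class ControlPlane:
    """'Dispatch': decides routes for the whole network."""
    def compute_routes(self, topology):
        # trivially pick the first listed next hop per destination
        return {dst: hops[0] for dst, hops in topology.items()}

class DataPlane:
    """'Delivery truck': forwards using the table it was given."""
    def __init__(self, table):
        self.table = table
    def forward(self, dst):
        return self.table[dst]

topology = {"10.1.0.0/16": ["wan-1", "wan-2"], "10.2.0.0/16": ["wan-2"]}
table = ControlPlane().compute_routes(topology)   # computed centrally...
edge = DataPlane(table)                           # ...then handed to every edge
assert edge.forward("10.1.0.0/16") == "wan-1"
```

Because the dispatch logic lives in software rather than in each box’s firmware, changing how the whole network routes is one central change, not one change per device.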

Here are a few more benefits that SD-WAN users are enjoying as a result of the separation of the Control and Data Planes:

  • Easier deployment; SD-WAN routers, once connected, are automatically authenticated and receive configuration information.
  • Real-time optimal traffic path detection and routing.
  • Traffic that’s sent directly to a cloud services provider, such as AWS or Azure, and not backhauled to a data center first, only then to be handed off to the Internet.
  • A significant reduction in bandwidth costs when compared to MPLS.
  • Network policies that no longer have to be set for each piece of equipment, but can be created once and pushed out to the entire network.
  • Greatly reduced provisioning time; a secondary Internet circuit is all that’s needed, so weeks spent awaiting the delivery of a new WAN circuit from a service provider are a thing of the past.
  • A reduction in costs, headaches and hassles thanks to SD-WAN’s agnostic approach to access type and service provider.
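
The “create once, push everywhere” policy model in the list above can be sketched in a few lines of Python. The site names and policy fields are invented for illustration:

```python
# Toy model of central policy distribution: one policy object is applied
# to every SD-WAN site instead of configuring each router by hand.

policy = {"app": "voip", "priority": "high", "path": "mpls-preferred"}

sites = {name: [] for name in ["dallas", "tokyo", "berlin"]}

def push_policy(sites, policy):
    """Apply the same policy to every site's local policy list."""
    for site_policies in sites.values():
        site_policies.append(dict(policy))   # each site gets its own copy
    return len(sites)

pushed = push_policy(sites, policy)   # one definition, applied network-wide
assert pushed == 3
```

Adding a fourth site means adding one dictionary entry, not hand-configuring another router, which is the provisioning-time win described above.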

Call on the SD-WAN experts

Enterprises and service providers are turning to SD-WAN for these, and many other, reasons, but there are a lot of architectures (overlay, in-net, hybrid) and SD-WAN providers from which to choose. And, like anything else regarding the health and well-being of your network, due diligence is of the utmost importance. That’s why enlisting the support of SD-WAN solutions architects and engineers will ensure you enjoy the most that SD-WAN can offer.

To find out more about SD-WAN and the many benefits it can provide your organization, contact GDT’s tenured SD-WAN engineers and solutions architects at They’ve implemented SD-WAN solutions for some of the largest enterprise networks and service providers in the world. They’d love to hear from you.

Cisco’s Power of v

In April of 2017, Cisco put both feet into the SD-WAN waters with its purchase of San Jose, California-based Viptela, a privately held SD-WAN company. One of the biggest reasons for the acquisition was the ease with which Viptela’s software could be integrated into Cisco’s platforms. Prior to the acquisition, Cisco’s SD-WAN solution utilized its own IWAN software, a somewhat complex and unwieldy option. The merger of IWAN and Viptela formed what is now called, not surprisingly, Cisco SD-WAN.

Questions concerning the agility and effectiveness of Cisco SD-WAN are perhaps best answered by the following quote from Cisco customer Agilent Technologies, a manufacturer of laboratory instruments:

“Agilent’s global rollout of Cisco SD-WAN enables our IT teams to respond rapidly to changing business requirements. We now achieve more than 80% improvement in turnaround times for new capability and a significant increase in application reliability and user experience.”

The following four “v” components comprise Cisco’s innovative SD-WAN solution.

Controller (vSmart)

What separates SD-WAN from the WAN technologies of the past is its decoupling of the data plane, which carries the traffic, from the control plane, which directs it. With decoupling, the controls are no longer maintained in equipment firmware, but in software that can be centrally managed. Cisco’s SD-WAN controller, vSmart, is cloud-based and uses the Overlay Management Protocol (OMP) to manage control and data policies.

vEdge routers

Cisco’s SD-WAN routers, called vEdge, receive data and control policies from the vSmart controller. They can establish secure IPsec tunnels to other vEdge routers, and can sit on-prem or be installed in private or public clouds. They can run traditional routing protocols, such as OSPF or BGP, to satisfy LAN needs on one side and WAN needs on the other.

vBond―the glue that holds it together

vBond is what connects and creates those secure IPSec tunnels between vEdge routers, after which key intel, such as IP addressing, is communicated to vSmart and vManage.


vManage

Managing the WAN traffic from a centralized location is what makes SD-WAN, well…SD-WAN. vManage provides that dashboard through a fully manageable, graphical interface from which policies and communications rules can be monitored and managed for the entire network. Different topologies can be designed and implemented through vManage, whether hub-and-spoke, spoke-to-spoke, or a design that accommodates different access types.
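
Beyond the dashboard, vManage also exposes a REST API for scripted management. As a rough outline: a script logs in via the j_security_check form, then queries “dataservice” endpoints such as the device inventory. The paths below are vManage’s documented ones, but the host and credentials are placeholders, and no request is actually sent here:

```python
# Outline of scripting against vManage's REST API. Host and credentials
# are placeholders for illustration; nothing is sent over the network.

HOST = "https://vmanage.example.com"   # placeholder vManage address

login_url = f"{HOST}/j_security_check"       # POST form: j_username / j_password
devices_url = f"{HOST}/dataservice/device"   # GET: inventory of vEdge routers

login_form = {"j_username": "admin", "j_password": "secret"}  # demo creds only

# e.g., with the requests library:
#   s = requests.Session()
#   s.post(login_url, data=login_form)
#   print(s.get(devices_url).json()["data"])
```

That single API surface is what lets policies be written once and monitored across every vEdge in the network, rather than box by box.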

To enjoy the Power of v, contact the experts at GDT

GDT has been a preferred Cisco partner for over 20 years, and its expert SD-WAN solutions architects and engineers have implemented SD-WANs for some of the largest enterprises and service providers in the world. Contact them at They’d love to hear from you.