Press & News

GDT achieves highest sales of Cisco products and services in its 20+ year history

Dallas, TX – GDT, a leading IT integrator and data center solutions provider, announced today that it achieved record sales of Cisco products and services for Cisco’s fiscal year 2018, which ended on July 31st. Cisco was the very first partner of GDT, which was started in 1996 by founder and owner J.W. Roberts.

“Our long-term partnership with Cisco is one of the key components that has helped build GDT into the company it is today,” said Roberts. “These record revenue numbers are testament to our strong Cisco relationship, our unwavering belief in their superior products and services, and our ongoing commitment to deliver best-of-breed solutions to GDT customers.”

GDT’s YTD 2018 growth has been due in part to tremendous sales increases in several key areas, including service provider, software, collaboration, enterprise networking and security. At a time when the IT industry is experiencing overall growth of less than 5 percent, GDT’s double-digit growth of Cisco products and software speaks volumes to its commitment to help customers achieve their digital transformation goals.

About GDT

Headquartered in Dallas, TX and with approximately 700 employees, GDT is a global IT integrator and solutions provider approaching $1 Billion in annual revenue. GDT aligns itself with industry leaders, providing the design, build, delivery and management of IT solutions and services. GDT specializes in the consulting, designing, deploying, and managing of advanced technology solutions for businesses, service providers, government, and healthcare. The GDT team of expert architects and engineers maintain the highest level of certifications to translate the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

           

GDT achieves Advanced-Level AWS Partner Network (APN) status

Dallas, TX – GDT, a leading IT integrator and data center solutions provider, announced today that it achieved Advanced Level status within the elite AWS (Amazon Web Services) Partner Network (APN), and has also been awarded entry into the AWS Public Sector Partner Program. Advancement within APN is based on revenue generation, commitment to training, and the number and quality of customer engagements.

“Our partnership with AWS has been a very rewarding experience for GDT on a number of levels,” said Vinod Muthuswamy, GDT President. “Our ongoing commitment to leading enterprise and public-sector customers on their digital transformation journey has been greatly enhanced by our close partnership with AWS. We eagerly anticipate continued success in the future.”

The APN Consulting Partners Program is reserved for professional services firms that help customers design, build, migrate and manage their applications and workloads on AWS. APN Consulting Partners include Network System Integrators, Managed Service Providers (MSPs) and Value-Added Resellers (VARs), and are provided access to a range of resources that ultimately help their customers better deploy, run and manage applications in the AWS Cloud.

About GDT

Headquartered in Dallas, TX and with approximately 700 employees, GDT is a global IT integrator and solutions provider approaching $1 Billion in annual revenue. GDT aligns itself with industry leaders, providing the design, build, delivery and management of IT solutions and services. GDT specializes in the consulting, designing, deploying, and managing of advanced technology solutions for businesses, service providers, government, and healthcare. The GDT team of expert architects and engineers maintain the highest level of certifications to translate the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

Dallas Technology Integrator GDT Names Troy Steele Director of Staffing Services

Dallas, TX – Dallas-based technology and systems integrator GDT today announced that Troy Steele has been named Director of Staffing Services, effective immediately. In his new role, Steele will oversee and direct GDT’s staff augmentation practice, which has a 20-year track record of helping customers improve operational efficiencies, reduce costs and drive key initiatives through the placement of IT professionals with the right skillsets.

Steele has spent the past twelve (12) years in the staffing industry, and has a proven track record of building highly profitable staffing organizations by understanding clients’ specific needs, corporate philosophies and organizational nuances.

“We’re excited to welcome Troy to GDT,” said Meg Gordon, GDT’s Vice President of Service Operations. “His experience and expertise building successful staffing organizations will greatly enhance our focus on growing GDT’s staff augmentation practice by continuing to provide the perfect candidates to fill customers’ IT staffing needs and requirements.”

Prior to joining GDT, Steele held several executive staffing positions, most recently with Beacon Hill Staffing, where he spent eight (8) years leading technical recruiting teams throughout Texas. Steele has a Bachelor of Arts in Communications from Southern Illinois University in Edwardsville, Illinois.

About GDT

Founded in 1996, GDT is an award-winning, international multi-vendor IT solutions provider and maintains high-level partner status with several of the world’s leading IT solutions and hardware providers, including HPE, Cisco and Dell EMC. GDT specializes in the consulting, designing, deploying, and managing of advanced technology solutions for businesses, service providers, government, and healthcare. The GDT team of expert architects and engineers maintain the highest level of certifications to translate the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

GDT honored as one of the top technology integrators in CRN’s 2018 Solution Provider 500 List

Dallas, TX – GDT, a leading IT integrator and data center solutions provider, announced today that CRN®, a brand of The Channel Company, has named GDT as one of the top 50 technology integrators in its 2018 Solution Provider 500 List. The Solution Provider 500 is CRN’s annual ranking by revenue of the largest technology integrators, solution providers and IT consultants in North America.

“GDT is very proud to have earned our high ranking on CRN’s 2018 Solutions Provider 500 List,” said GDT President Vinod Muthuswamy. “It’s humbling to be listed with so many highly touted and respected companies, and our inclusion is further proof of our steadfast commitment to delivering digital transformation solutions for our customers.”

CRN has published the Solution Provider 500 list since 1995, and it is the predominant channel partner award list in the industry. It highlights the IT channel partner organizations that earned the most revenue in 2018, and is a valuable resource for vendors looking for top solution providers with which to partner. This year’s list comprises companies with a combined revenue of over $320 billion.

The complete 2018 Solution Provider 500 list is published on CRN.com and is available online at www.crn.com/sp500.

About GDT

Headquartered in Dallas, TX and with approximately 700 employees, GDT is a global IT integrator and solutions provider approaching $1 Billion in annual revenue. GDT aligns itself with industry leaders, providing the design, build, delivery and management of IT solutions and services. GDT specializes in the consulting, designing, deploying, and managing of advanced technology solutions for businesses, service providers, government, and healthcare. The GDT team of expert architects and engineers maintain the highest level of certifications to translate the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

 

Dallas Technology Integrator GDT Names Adnan Khan Director of Cloud and DevOps

Dallas, TX – Dallas-based technology and systems integrator GDT today announced that Adnan Khan has been named Director of Hybrid Cloud and DevOps, effective immediately. In his new role, Khan will provide technical leadership for the architecture, design and management of GDT’s software development practice, and expand on its many cloud-related initiatives.

Khan has extensive, hands-on software development leadership experience utilizing lean practices, such as Agile/Scrum. With over 15 years of experience working in high-performance distributed practices, Khan is particularly skilled in the following IT technologies: Wireless WAN (CDMA and GSM), Storage Area Networking (SAN), Network Attached Storage (NAS), Android-based applications, location-based services, cloud computing, SaaS, blockchain, cryptocurrency and the Internet of Things (IoT) for both the consumer and enterprise markets.

“We’re excited to welcome Adnan to GDT’s team of talented, forward-thinking IT engineers and professionals,” said Brad Davenport, GDT Vice President of Solutions Engineering. “We know his tremendous experience, wide-ranging technological expertise and unique skillsets will prove invaluable to GDT.”

Prior to joining GDT, Khan held several senior-level management positions in the IT industry, and has overseen many on- and offshore teams that consistently delivered complex software solutions, from inception to deployment. Many of those solutions are currently being used by millions of customers of some of the most noteworthy wireless carriers in the world.

Khan holds an MBA from the University of California at Irvine’s Paul Merage School of Business, and a Master’s Degree in Computer Science from Pakistan’s Karachi University. In addition, Khan holds several IT-related patents.

About GDT

Founded in 1996, GDT is an award-winning, international multi-vendor IT solutions provider and maintains high-level partner status with several of the world’s leading IT solutions and hardware providers, including HPE, Cisco and Dell EMC. GDT specializes in the consulting, designing, deploying, and managing of advanced technology solutions for businesses, service providers, government, and healthcare. The GDT team of expert architects and engineers maintain the highest level of certifications to translate the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

GDT and CloudFabrix to Jointly Offer NextGen IT Transformation Services

Dallas, TX – GDT, a leading IT integrator and data center solutions provider, and CloudFabrix, an AIOps Software vendor, have joined forces to accelerate the IT transformation journey for customers with next generation managed services built on the CloudFabrix cfxDimensions AIOps platform. As a result, GDT will enhance its current managed services offerings, which include cloud, hybrid IT, IoT and customized DevOps solutions. Ideal for VARs and MSPs, the CloudFabrix AIOps platform provides product and services suites for enterprise customers and MSPs, and offers a wide array of foundational capabilities, including any-time, any-source Data Ingestion, Dynamic Asset Discovery, Advanced Analytics, Machine Learning and Blockchain, among others.

The CloudFabrix AIOps platform, which addresses cloud, security and architectural needs, also provides implementation services and enterprise support to VARs and MSPs, all of which greatly reduces partners’ time to value (TtV). GDT, combining the platform with its tremendous engineering skillsets and vast experience providing managed services to customers of all sizes across a wide range of industries, will be able to further enhance what it has provided to customers for over 20 years: the delivery of highly innovative IT solutions with a customer-first focus.

“CloudFabrix has already enabled GDT to address many of the architectural and security needs of our customers,” said GDT President Vinod Muthuswamy. “And that, combined with our experience delivering managed services, cloud, hybrid IT, IoT and customized DevOps solutions to customers, will accelerate and improve upon our ability to provide innovative technological solutions that ultimately help customers work on the projects that will help shape their organization’s future.”

Said CloudFabrix Chief Revenue Officer Kishan Bulusu, “We are excited about working closely with GDT, a network integrator that’s made a tremendous name for itself in the managed services, cloud and hybrid IT space. The initiatives we’ve developed at an organic level will not only enhance GDT’s service offerings, but better serve the MSP community at large. Partnering with GDT will also help CloudFabrix enhance our product and platform offerings, and allow us to focus on NextGen technological and architectural capabilities. This will ultimately help CloudFabrix better address and serve the unique needs of our partners’ customers.”

About GDT

Headquartered in Dallas, TX and with approximately 700 employees, GDT is a global IT integrator and solutions provider approaching $1 Billion in annual revenue. GDT aligns itself with industry leaders, providing the design, build, delivery and management of IT solutions and services. GDT specializes in the consulting, designing, deploying, and managing of advanced technology solutions for businesses, service providers, government, and healthcare. The GDT team of expert architects and engineers maintain the highest level of certifications to translate the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

 About CloudFabrix

CloudFabrix enables responsive, business-aligned IT by making your IT more agile, efficient and analytics-driven. CloudFabrix helps enterprises holistically develop, modernize and govern IT processes, applications and operations to meet business outcomes in a consistent and automated manner. The CloudFabrix AIOps Platform simplifies and unifies IT operations and governance of both traditional and modern applications across multi-cloud environments. CloudFabrix accelerates enterprises’ cloud-native journey by providing many built-in foundational services and turnkey operational capabilities. CloudFabrix is headquartered in Pleasanton, CA.

GDT Wins VMware 2017 Regional Partner Innovation Award

Partners Awarded for Extraordinary Performance and Notable Achievements

GDT today announced that it has received the Americas VMware Partner Innovation Award for the Transform Networking & Security category. GDT was recognized at VMware Partner Leadership Summit 2018, held in Scottsdale, AZ.

“We congratulate GDT on winning a VMware Partner Innovation Award for the Transform Networking & Security category, and look forward to our continued collaboration and innovation,” said Frank Rauch, vice president, Americas Partner Organization, VMware. “VMware and our partners will continue to empower organizations of all sizes with technologies that enable digital transformation.”

GDT President Vinod Muthuswamy said, “GDT is honored to have received the Americas VMware Partner Innovation Award in the Networking & Security category. It’s humbling to know our innovation and focus in network and security transformation is being recognized by leaders like VMware. Our close partnership with VMware is greatly enabling our customers to realize their Hybrid IT and digital transformation vision and goals.”

Recipients of an Americas VMware Partner Innovation Award were acknowledged in 14 categories for their outstanding performance and distinctive achievements during 2017.

Americas Partner of the Year Award categories included:

  • Cloud Provider
  • Emerging Markets Distributor
  • Empower the Digital Workspace
  • Integrate Public Clouds
  • Marketing
  • Modernize Data Centers
  • OEM
  • Professional Services
  • Regional Distributor
  • Regional Emerging Markets Partner
  • Solution Provider
  • Transform Networking & Security
  • Transformational Solution Provider
  • Technology

About VMware Partner Leadership Summit 2018

VMware Partner Leadership Summit 2018 offered VMware partners the opportunity to engage with VMware executives and industry peers to explore business opportunities, customer use cases, solution practices, and partnering best practices. As an invitation-only event, it provided partners with resources to develop and execute comprehensive go-to-market plans. VMware Partner Leadership Summit 2018 concluded with award ceremonies recognizing outstanding achievements in the VMware partner ecosystem.

About GDT

Headquartered in Dallas, TX with approximately 700 employees, GDT is a global IT integrator and solutions provider approaching $1 Billion in annual revenue. GDT aligns itself with industry leaders, providing the design, build, delivery and management of IT solutions and services. GDT specializes in the consulting, designing, deploying, and managing of advanced technology solutions for businesses, service providers, government, and healthcare. The GDT team of expert architects and engineers maintain the highest level of certifications to translate the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

# # #

VMware is a registered trademark of VMware, Inc. in the United States and other jurisdictions.

 

Dallas Network Integrator GDT’s Spring Fling Bar-B-Que Results in $10,000 Donation to New Horizons of North Texas

Dallas, TX – Dallas-based technology and systems integrator GDT announced at its Annual Spring Fling Bar-B-Que, May 3rd and 4th, that New Horizons of North Texas will receive this year’s $10,000 winner’s donation.

GDT’s Annual Spring Fling Bar-B-Que was started in 2014 by GDT CEO J.W. Roberts to further the company’s fun atmosphere while benefiting local charities. The event pits ten (10) GDT Account Executives against each other to determine who can smoke the best brisket and ribs. Each cross-departmental team was composed of GDT technology partners, including Cisco, HPE, Dell EMC, Pure Networks, VMware, Veeam, Juniper Networks, Hypercore Networks, Cohesity, QTS, APS, Jive Communications and Global Knowledge.

The Spring Fling Bar-B-Que is centered around a 19-hour, highly competitive cooking event, featuring state-of-the-art smokers, secretive pre-event meetings, and closely guarded recipes. It’s a great event full of food and fun, and provides the perfect environment for camaraderie and relationship building for the over 300 GDT employees in Dallas. And, of course, a winner is crowned, who then unveils the charity selected to receive the $10,000 donation. GDT Account Executive Chris Bedford, who captained the winning team, selected New Horizons of North Texas.

Said Bedford, a 20-year GDT veteran, “Our annual Spring Fling Bar-B-Que is one of the many marquee―and outrageously fun―events our marketing team produces each year, but being able to donate $10,000 to a great organization like New Horizons of North Texas makes it even more special.”

GDT’s Annual Spring Fling and Bar-B-Que is one of many examples of the company’s work hard, play hard philosophy and its ongoing commitment to giving back to the D/FW community.

About New Horizons of North Texas

New Horizons is a faith-based 501(c)(3) nonprofit dedicated to serving at-risk youth growing up in situations of poverty and academic struggle. The mission of New Horizons of North Texas is to empower at-risk youth to reach their full potential with tutoring, mentoring, and faith-building. New Horizons works with a highly relational, individualized, and long-term approach to provide support for elementary students all the way through their high school graduation, while providing over 250 hours of mentorship to each child each year. Visit www.newhorizonsofntx.org to learn more about New Horizons.

About GDT

Founded in 1996, GDT is an award-winning, international multi-vendor IT solutions provider and maintains high-level partner status with several of the world’s leading IT solutions and hardware providers, including HPE, Cisco and Dell EMC. GDT specializes in the consulting, designing, deploying, and managing of advanced technology solutions for businesses, service providers, government, and healthcare. The GDT team of expert architects and engineers maintain the highest level of certifications to translate the latest ideas and technologies into innovative solutions that realize the vision of business leaders.

What exactly is a Network Appliance?

We work in an industry rife with nomenclature issues. For instance, Hybrid IT is often used interchangeably with Hybrid Cloud―it shouldn’t be; they’re different. They were even referred to in an “also known as” manner within a beautiful, 4-color brochure produced by one of the leading equipment vendors in the IT industry. I’ve seen hyperconverged substituted for converged, SAN confused with NAS, and SDN and SD-WAN listed as equivalents. The list is seemingly endless.

The good news? Getting the answer is pretty easy, and only a few clicks away. Yes, Google is, for most, the answer to getting correct answers. Ask it a question, then read through the spate of corresponding articles from reputable sources, and you can generally deduce the right answer. When ninety-eight (98) answers say it’s A, and one (1) claims it’s B―it’s probably A.

When does “it” become an Appliance?

Sitting in a non-company presentation recently, I heard the word appliance used several times, and, even though I’ve been in the IT and telecommunications industry for years, I realized I didn’t technically know what appliance meant, or how it differed from other networking equipment. I turned to the person seated on my left and asked, “What’s the difference between an appliance and a piece of networking equipment, be it a router, server, etc.?” The answer he provided offered little help. In an attempt to hide my dissatisfaction, I quietly whispered the same question to an engineer on my right. His answer could be only slightly construed as similar to the first response―slightly. In fact, the only true commonality between the answers came in the form of two (2) words―single function. Clear as Mississippi mud pie, right? During a break, I asked the question of several in attendance, and got answers that ran a mile wide and an inch deep but provided, essentially, little information, possibly less than before.

I turned to Google, of course. But I discovered something I didn’t believe was possible―there was literally no definition or information I could find that even attempted to distinguish what, exactly, makes for a network appliance. According to “my history” in Google Chrome, I typed in over thirty (30) variations of the same question. Nothing. Frustrating. But I had something better than Google.

It works with governmental elections

GDT has over two hundred (200) solutions architects and engineers, all talented and tenured, who have collectively earned well over one thousand (1,000) of the industry’s highest certifications. Why not poll some of the industry’s best and brightest with the question, “What differentiates an ‘appliance’ from other networking equipment?”

They weren’t allowed to reply “TO ALL,” so that others’ answers wouldn’t influence theirs. Also, they couldn’t Google the question, or any derivative thereof, which, based on my experience, wouldn’t have helped anyway.

Drum roll, please

Responses came pouring in, even though it was after 5 PM on a Friday afternoon. So in lieu of posting well over one hundred (100) responses, I decided to craft, based on those responses (one was even a haiku), a definition of a network appliance related to how it’s differentiated from a non-appliance. Here goes…

A network appliance is different from a non-appliance because it comes pre-configured and is built with a specific purpose in mind.

And because I’m a fan of analogies, here’s one I received:

“You can make toast in the oven, but you’ve got a toaster, a device that is specifically made for making toast. Because it’s designed for a narrow problem set, the toaster is smaller than the oven, more energy efficient, easier to operate, and cheaper. An appliance is something that is able to be better than a general-purpose tool because it does less.”

And for you Haiku fans:

“It is a server

Or a virtual machine

That runs services”

There it is―a definition, an analogy, even a Haiku. Now don’t get me started on the word device.

Turn, like I did, to the experts

GDT’s team of solutions architects and engineers maintain the highest certification levels in the industry. They’ve crafted, installed and currently manage the networks and security needs of some of the largest enterprises and service providers in the world. They can be reached at SolutionsArchitects@gdt.com or at Engineering@gdt.com. Great folks; they’d love to hear from you.

Riding the Hyperconvergence Rails

If your organization isn’t on, or planning to get on, the road to hyperconvergence (HCI), you may soon be left waving at your competitors as the HCI train flies by. A recent industry study found that approximately 25% of companies currently use hyperconvergence, and another 23% plan on moving to it by the end of this year. And those percentages are considerably higher in certain sectors, such as healthcare and government. In addition to the many benefits HCI delivers—software-defined storage (SDS), an easier way to launch new cloud services, modernization of application development and deployment, and far more flexibility for data centers and infrastructures—it is currently providing customers, according to the study, an average of 25% in OPEX savings. It might be time to step up to the ticket window.

All Aboard!

If you haven’t heard about Dell EMC’s VxRail appliances, it’s time you did―they’ve been around for about two (2) years now. In that first year alone, Dell EMC sold in excess of 8,000 nodes to well over 1,000 customers. And in May of this year, the company announced a significant upgrade to its HCI portfolio with the launch of more robust VxRail appliances, including major upgrades to VxRack, its Software-Defined Data Center (SDDC) system. VxRail was developed closely with VMware, of which Dell EMC owns eighty percent (80%).

The VxRail Portfolio of Appliances

All VxRail appliances listed below offer easy configuration flexibility, including future-proof capacity and performance with NVMe cache drives, 25GbE connectivity, and NVIDIA P40 GPUs (graphics processing units). They’re all built on Dell EMC’s latest PowerEdge servers, which are powered by Intel Xeon Scalable processors, and are available in all-flash or hybrid configurations.

G Series―the G in G-Series stands for general, as in general purpose appliance. It can handle up to four (4) nodes in a 2U chassis.

E Series―whether deployed in the data center or at the edge (hence the letter E), the E Series’ sleek, low-profile design fits into a 1U chassis.

V Series―the V stands for video; it is VDI-optimized and graphics-ready, and can support up to three (3) graphics accelerators for high-end 2D or 3D visualization. The V Series appliance provides one (1) node in its 2U profile.

P Series―P for performance. Each P Series appliance is optimized for the heaviest of workloads (think databases). Its 2U profile offers one (1) node per chassis.

S Series―storage is the operative word here, and the S Series appliance is perfect for storage-dense applications, such as Microsoft Exchange or SharePoint. And if big data and analytics are on your radar screen, the S Series appliance is the right one for you. Like the P and V Series appliances, the S Series provides one (1) node in its 2U profile.

And to help you determine which VxRail appliance is right for your organization, Dell EMC offers a nifty, simple-to-use VxRail Right Sizer Tool.

Perfect for VMware Customers

VMware customers are already familiar with the vCenter Server, which provides a centralized management platform to manage VMware environments. All VxRail appliances can be managed through it, so there’s no need to learn a new management system.

Questions about Hyperconvergence or VxRail?

For more information about hyperconvergence, including what Dell EMC’s VxRail appliances can provide for your organization, contact GDT’s solutions architects and engineers at SolutionsArchitects@gdt.com. They hold the highest technical certification levels in the industry, and have designed and implemented hyperconverged solutions, including ones utilizing GDT partner Dell EMC’s products and services, for some of the largest enterprises and service providers in the world. They’d love to hear from you.

When good fiber goes bad

Fiber optics brings to mind a number of things, all of them great: speed, reliability, high bandwidth, long-distance transmission, immunity to electromagnetic interference (EMI), and strength and durability. Fiber optics is composed of fine glass, which might not sound durable, but flip the words fiber and glass and you’ve got a different story.

Fiberglass, as the name not so subtly suggests, is made up of glass fibers―at least partially. It achieves its incredible strength once it is combined with plastic. Originally used as insulation, the fiberglass train gained considerable steam in the 1970s after asbestos, which had been widely used for insulation for over fifty (50) years, was found to cause cancer. But that’s enough about insulation.

How Fiber goes bad

As is often the case with good things, fiber optics doesn’t last forever. Or, it should be said, it doesn’t perform ideally forever. There are several issues that prevent it from delivering its intended goals.

Attenuation

Data transmission over fiber optics involves shooting light between input and output locations, and when that light degrades, or loses power, along the way, the loss is known as attenuation. High attenuation is bad; low is good. There’s actually a mathematical equation that calculates the degree of attenuation, and this sum of all losses can be caused by degradation in the fiber itself, poor splice points, or any point or junction where it’s connected.
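
For readers who want the math, here is a brief sketch: optical loss is conventionally expressed in decibels by comparing the power launched into the fiber with the power that arrives at the far end, and the attenuation coefficient simply spreads that loss over the link length.

```latex
% Optical loss in decibels, and the per-kilometer attenuation coefficient
\text{Loss (dB)} = 10 \log_{10}\!\left(\frac{P_{\text{in}}}{P_{\text{out}}}\right),
\qquad
\alpha~(\text{dB/km}) = \frac{\text{Loss (dB)}}{L~(\text{km})}
```

For example, a span that launches 1 mW of optical power and delivers 0.25 mW at the far end has lost 10 · log10(1/0.25) ≈ 6 dB.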

Dispersion

When you shine a flashlight, the beam of light disperses over distance. This is dispersion. It’s expected, usually needed, when using a flashlight, but not your friend when it occurs in fiber optics. In fiber, dispersion occurs as a result of distance; the farther the light travels, the more the pulse spreads and the more degraded the signal becomes. The fiber must still deliver enough light to meet the bare minimum required by the receiving electronics.

Scattering

Signal loss or degradation can occur when there are microscopic variations in the fiber, which, well, scatter the light. Scattering can be caused by fluctuations in the fiber’s composition or density, and is most often due to issues in manufacturing.

Bending

When fiber optic cables are bent too much (and yes, there’s a mathematical formula for that), there can be a loss or degradation in data delivery. Bending can cause the light to be reflected at odd angles, and can be due to bending of the outer cladding (Macroscopic bending), or bending within it (Microscopic bending).
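
For a rough sense of how much bending is “too much,” a commonly cited rule of thumb (an assumption here; the cable manufacturer’s specification always governs) ties the minimum safe bend radius to the cable’s outer diameter:

```latex
% Rule-of-thumb minimum bend radius relative to cable outer diameter D
R_{\min} \approx 10 \times D \quad \text{(installed, no tension)},
\qquad
R_{\min} \approx 20 \times D \quad \text{(under pulling tension)}
```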

To the rescue―the Fiber Optic Characterization Study

Thankfully, determining the health of fiber optics doesn’t rely on a Plug it in and see if it works approach. It’s a good thing, considering there are an estimated 113,000 miles of fiber optic cable traversing the United States. And that number just represents “long haul” fiber, and doesn’t include fiber networks built within cities or metro areas.

Fiber Characterization studies determine the overall health of a fiber network. The study consists of a series of tests that ultimately determine if the fiber in question can deliver its targeted bandwidth. As part of the study, connectors are tested (which cause the vast majority of issues), and the types and degrees of signal loss are calculated, such as core asymmetry, polarization, insertion and optical return loss, backscattering, reflection and several types of dispersion.

As you probably guessed, Fiber Characterization studies aren’t conducted in-house, unless your house maintains the engineering skill sets and equipment to carry it out.

Questions about Fiber Characterization studies? Turn to the experts

Yes, fiber optics is glass, but that doesn’t mean it will last forever, even if it never tangles with its arch nemesis―the backhoe. If it’s buried underground, or is strung aerially, it does have a shelf life. And while its shelf life is far longer than its copper or coax counterparts, it will degrade, then fail, over time. Whether you’re a service provider or utilize your own enterprise fiber optic network, success relies on the three (3) D’s―dependable delivery of data. A Fiber Characterization Study will help you achieve those.

If you have questions about optical networking, including Fiber Characterization studies, contact The GDT Optical Transport Team at Optical@gdt.com. They’re highly experienced optical engineers and architects who support some of the largest enterprises and service providers in the world. They’d love to hear from you.

 

 

 

The Hyper in Hyperconvergence

The word hyper probably brings to mind energy, and lots of it, possibly as it relates to a kid who paints on the dining room wall or breaks things, usually of value. But in the IT industry, hyper takes on an entirely different meaning, at least when combined with its compound counterpart―visor.

Hyperconvergence, in regards to data center infrastructures, is a step-up from convergence, and a stepping stone to composable. And, of course, convergence is an upgrade from traditional data center infrastructures, which are still widely used but eschew the use of, among other things, virtualization. Traditional data center infrastructures are heavily siloed, requiring separate skill sets in storage, networking, software, et al.

The Hypervisor―the engine that drives virtualization

Another compound word using hyper is what delivers the hyper in hyperconvergence: the hypervisor. In hyperconvergence, hypervisors manage virtual machines (VMs), each of which runs its own programs while appearing to have the host hardware’s memory, processor and other resources to itself. The word hypervisor sounds like a tangible product, but it’s software, and is provided by, among others, market leaders VMware, Microsoft and Oracle. This hypervisor software is what allocates those resources, including memory and processor, to the VMs. Think of hypervisors as a platform for virtual machines.
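
To make that resource allocation concrete, here’s a minimal, hedged sketch using the open-source libvirt Python bindings (which front KVM and several other hypervisors) to connect to a host and list each VM along with the memory and virtual CPUs the hypervisor has handed it. The connection URI and the presence of the libvirt-python package are assumptions; VMware, Microsoft and Oracle hypervisors expose the same kind of information through their own APIs.

```python
# Minimal sketch: ask a hypervisor (via libvirt) what it has allocated to each VM.
# Assumes the libvirt-python package and a local KVM/QEMU host; adjust the URI as needed.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the hypervisor
try:
    for dom in conn.listAllDomains():  # every VM this hypervisor manages
        state, max_mem_kib, _, vcpus, _ = dom.info()
        print(f"{dom.name()}: {vcpus} vCPU(s), {max_mem_kib // 1024} MiB allocated, "
              f"{'running' if dom.isActive() else 'stopped'}")
finally:
    conn.close()
```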

Two (2) Types of Hypervisors

Hypervisors come in two (2) flavors, and deciding between either comes down to several issues, including compatibility with existing hardware, the level and type of management required, and performance that will satisfy your organization’s specific needs. Oh, and don’t forget budgetary considerations.

Bare-Metal – Type 1

Type 1 hypervisors are loaded directly onto hardware that doesn’t come pre-loaded with an Operating System. Type 1 hypervisors are the Operating System, and are more flexible, provide better performance and, as you may have guessed, are more expensive than their Type 2 counterparts. They’re usually single-purpose servers that become part of the resource pools that support multiple applications for virtual machines.

Hosted – Type 2

A Type 2 hypervisor runs as an application loaded on the Operating System already installed on the hardware. But because it’s loaded on top of the existing OS, it creates an additional layer of programming, or hardware abstraction, which is another way of saying it’s less efficient.

So which Type will you need?

In the event you’re looking to move to a hyperconverged infrastructure, both the type of hypervisor, and from which partner’s products to choose, will generate a spate of elements to evaluate, such as the management tools you’ll need, which hypervisor will perform best based on your workloads, the level of scalability and availability you’ll require, and, of course, how much you’ll be able to afford.

It’s a big decision, so consulting with hyperconvergence experts should probably be your first order of business. The talented solutions architects and engineers at GDT have delivered hyperconvergence solutions to enterprises and service providers of all sizes. They’d love to hear from you, and can be reached at SolutionsArchitects@gdt.com.

How does IoT fit with SD-WAN?

Now that computing has been truly pushed out to the edge, it brings up questions about how it will mesh with today’s networks. The answer? Very well, especially regarding SD-WAN.

IoT is built on three components that make it work―sensors, gateways and the Cloud. No, smart phones aren’t on the list. In fact, and for simplicity’s sake, let’s not call smart phones devices. The technology sector is particularly adept at incorrectly using words interchangeably. In this case, the confusing word is device. For instance, when you hear statistics about the estimated number of connected devices exceeding 20 billion by 2020, smart phones are not part of that figure. While smart phones are often called devices and do have sensors that can detect tilt (gyroscope) and acceleration (accelerometer), IoT sensors extend beyond those devices (oops, I did it again; let’s call them pieces of equipment) that provide Internet connectivity―laptops, tablets and, yes, smart phones.

Sensors and Gateways and Clouds…oh my

Sensors are the edge devices, and can detect, among other things, temperature, pressure, water quality, existence of smoke or gas, et al. Think Ring Doorbell or Nest Thermostat.

The gateway can be hardware or software (sometimes both), and handles the aggregation of connectivity and the encryption and decryption of IoT data. Gateways translate the protocols used by IoT sensors and handle management, onboarding (storage and analytics) and edge computing. Gateways, as the name suggests, serve as a bridge between IoT devices, their associated protocols, such as Wi-Fi or Bluetooth, and the environment where the gathered data gets utilized.
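
As a hedged illustration of that bridging role, here’s a tiny gateway loop that reads a (hypothetical) local sensor and republishes its readings to a cloud-side MQTT broker using the widely used paho-mqtt library; the broker address, topic and read_temperature() helper are invented for the example.

```python
# Toy gateway sketch: read a local sensor, forward readings to the cloud over MQTT.
# Assumes the paho-mqtt package; the broker, topic and read_temperature() are placeholders.
import json
import random
import time

import paho.mqtt.client as mqtt

BROKER = "mqtt.example.com"              # hypothetical cloud-side broker
TOPIC = "site-42/sensors/temperature"    # hypothetical topic

def read_temperature() -> float:
    """Stand-in for a real sensor read (e.g., over Bluetooth or a local bus)."""
    return 20.0 + random.random() * 5.0

client = mqtt.Client()  # paho-mqtt 1.x style constructor; 2.x expects a CallbackAPIVersion argument
client.connect(BROKER, 1883)
client.loop_start()

try:
    while True:
        payload = json.dumps({"temp_c": read_temperature(), "ts": time.time()})
        client.publish(TOPIC, payload, qos=1)  # forward the reading upstream
        time.sleep(30)
finally:
    client.loop_stop()
    client.disconnect()
```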

SD-WAN and IoT

SD-WAN simplifies network management―period. And a subset of that simplicity comes in the form of visibility and predictability, which is exactly what IoT needs. SD-WAN can help ensure IoT devices in remote locations will get the bandwidth and security needed, which is especially important considering IoT devices don’t maintain a lot of computing power (for example, they usually don’t have enough to support Transport Layer Security (TLS)).

SD-WAN gives network managers the ability to segment traffic based on type―in this case, IoT―so device traffic can always be sent over the most optimal path. And SD-WAN traffic can be sent directly to a cloud services provider, such as AWS or Azure. In traditional architectures, such as MPLS, the traffic has to be backhauled to a data center, after which it is handed off to the Internet. Hello, latency―not good for IoT devices that need real-time access and updating.
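
To illustrate the path-selection idea, here’s a hedged, toy sketch of the kind of policy logic an SD-WAN controller applies when steering an IoT traffic class; the link names, measurements and thresholds are invented for the example, and real products implement this far more elaborately.

```python
# Toy sketch of SD-WAN-style path selection for an IoT traffic class.
# Link names, metrics and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float   # measured latency on the link
    loss_pct: float     # measured packet loss
    up: bool            # link health

def pick_path_for_iot(links: list[Link], max_latency_ms: float = 150.0,
                      max_loss_pct: float = 1.0) -> Link:
    """Prefer the lowest-latency healthy link that meets the IoT class's SLA."""
    eligible = [l for l in links
                if l.up and l.latency_ms <= max_latency_ms and l.loss_pct <= max_loss_pct]
    candidates = eligible or [l for l in links if l.up]  # degrade gracefully
    return min(candidates, key=lambda l: (l.latency_ms, l.loss_pct))

links = [Link("mpls", 35.0, 0.1, True),
         Link("broadband", 22.0, 0.4, True),
         Link("lte", 60.0, 1.8, True)]
print(pick_path_for_iot(links).name)  # -> "broadband" with these sample numbers
```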

SD-WAN is vendor-agnostic, and can run over virtually any existing topology, such as cellular, broadband and Wi-Fi, which makes it easier to connect devices in some of the more far-flung locations. And management can be accomplished through a central location, which makes it easier to integrate services across the IoT architecture of your choosing.

As mentioned earlier, there will be an estimated 20 billion IoT devices in use by 2020, up from 11 billion presently (by 2025…over 50 billion). The number of current endpoints being used is amazing, but the growth rate is truly staggering. And for IoT to deliver on its intended capabilities, it needs a network that can help it successfully deliver access to real-time data. That sounds like SD-WAN.

Here’s a great resource

To find out more about SD-WAN and exactly how it provides an ideal complement to IoT, contact GDT’s tenured SD-WAN engineers and solutions architects at SDN@gdt.com. They’ve implemented SD-WAN and IoT solutions for some of the largest enterprise networks and service providers in the world. They’d love to hear from you.

Unwrapping DevOps

As the name suggests, DevOps is the shortened combination of two (2) words―development and operations. Originally, application development was time-consuming, fraught with errors and bugs, and, ultimately, resulted in the bane of the business world―slow to market.

Prior to DevOps, which addresses that slow to market issue, application developers worked in sequestered silos. They would collaborate with operations at a minimum, if at all. They’d gather requirements from operations, write huge chunks of code, then deliver their results weeks, maybe months, later.

The primary issue that can sabotage any relationship, whether personal or professional, is a lack of communication. Now sprinkle collaboration into the mix, and you have DevOps. It broke down the communication and collaboration walls that still exist, wherever DevOps isn’t being utilized, between the two (2). The result? Faster time to market.

Off-Shoot of Agile Development

DevOps, which has been around for approximately ten (10) years, was borne out of Agile Development, created roughly ten (10) years prior to that. Agile Development is, simply, an approach to software development. Agile, as the name suggests, delivers the final project with more speed, or agility. It breaks down software development into smaller, more manageable chunks, and solicits feedback throughout the development process. As a result, application development became far more flexible and capable of responding to needs and changes much faster.

While many use Agile and DevOps interchangeably, they’re not the same

While Agile provides tremendous benefits as it relates to software development, it stops short of what DevOps provides. While DevOps can certainly utilize Agile methodologies, it doesn’t drop off the finished product, then quickly move on to the next one. Agile is a little like getting a custom-made device that solves some type of problem; DevOps will make the device, as well, but will also install it in the safest and most effective manner. In short, Agile is about developing applications―DevOps both develops and deploys them.

How does DevOps address Time to Market?

Prior to DevOps and Agile, application developers would deliver their release to operations, which would be responsible for testing the resultant software. And when testing isn’t conducted throughout the development process, operations is left with a very large application, often littered with issues and errors. Hundreds of thousands of lines of code that access multiple databases, networks and interfaces can require a tremendous amount of man hours to test, which in turn takes those man hours off other pressing projects―inefficient, wasteful. And often there was no single person or entity responsible for overseeing the entire project, and each department may have different success metrics. Going back to the relationship analogy, poor communication and collaboration means frustration and dissatisfaction for all parties involved. And with troubled relationships comes finger-pointing.

Automation

One of the key elements of DevOps is its use of automation, which helps deliver faster, more reliable deployments. Through the use of automation testing tools currently available, like Selenium, Test Studio and TestNG, to name a few, test cases can be constructed, then run while the application is being built. This reduces testing times dramatically and helps ensure each of the processes and features has been developed error-free.
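
As a hedged illustration of what one of those automated checks might look like, here’s a small Selenium (Python) sketch that exercises a login flow; the URL, element IDs and expected page title are hypothetical placeholders, and a real pipeline would run many such tests on every build.

```python
# Minimal automated UI test sketch using Selenium's Python bindings (pip install selenium).
# The URL, element IDs and expected title are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes a local Chrome/ChromeDriver setup
try:
    driver.get("https://app.example.com/login")
    driver.find_element(By.ID, "username").send_keys("test.user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()

    # Fail the build early if the application didn't land on the dashboard.
    assert "Dashboard" in driver.title, f"Unexpected page title: {driver.title}"
finally:
    driver.quit()
```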

Automation is utilized for more than just testing, however. Workflows in development and deployment can be automated, enhancing collaboration and communication and, of course, shortening the delivery process. Production-ready environments that have already been tested can be continuously delivered. Real-time reporting can provide a window into any changes, or defects, that have taken place. And automated processes mean fewer mistakes due to human error.

Questions about what DevOps can deliver to your organization?

While DevOps isn’t a product, it’s certainly an integral component to consider when evaluating a Managed Services Provider (MSP). GDT’s DevOps professionals have time and again helped to provide and deploy customer solutions that have helped shorten the time to market they’ve needed to enjoy positive business outcomes. For more information about DevOps and the many benefits it can provide to organizations of all sizes, contact GDT’s talented, tenured solutions architects at SolutionsArchitects@gdt.com. They’d love to hear from you.

How do you secure a Cloud?

Every organization has, has plans to, or wants to move to The Cloud. And by 2020, most will be there. According to a recent survey, within two (2) years 83% of enterprise workloads will be in The Cloud―41% on public Clouds, like AWS and Microsoft Azure, 20% will be private-Cloud based, and 22% as part of a hybrid architecture. With the amount of traffic currently accessing The Cloud, and considering the aforementioned survey figures, security will continue to be at the forefront of IT departments’ collective minds―as well it should.

With organizations selectively determining what will run in The Cloud, security can prove challenging. Now throw in DevOps’ ability to build and test Cloud apps easier and faster, and you’ve amped those Cloud security concerns significantly.

Security Solutions geared for The Cloud

To address the spate of Cloud-related security concerns, Cisco built an extensive portfolio of solutions, listed below, to secure customers’ Cloud environments, whether public, private, or a combination of both (hybrid).

Cisco Cloudlock

The Cloudlock DLP (Data Loss Prevention) technology doesn’t rest; it continuously monitors Cloud environments to detect sensitive information, then protects it. Cloudlock controls Cloud apps that connect to customers’ networks, enforces data security, provides risk profiles and enforces security policies.

Cisco Email Security

Cisco Email Security protects Cloud-hosted email, defending organizations from threats and phishing attacks in G Suite and Office 365.

Cisco Stealthwatch Cloud

Stealthwatch Cloud detects abnormal behavior and threats, then quickly quells them before they evolve into a disastrous breach.

Cisco Umbrella

Cisco Umbrella provides user protection regardless of the type, or location, of Internet access. It utilizes deep threat intelligence to provide a safety net—OK, an umbrella—for users by preventing them access to malicious, online destinations, and thwarts any suspect callback activities.

Cisco SaaS Cloud Security

If users are off-network, anti-virus software is often the only protection available. Cisco’s AMP (Advanced Malware Protection) for Endpoints prevents threats at their point of entry, and continuously tracks each and every file that accesses those endpoints. AMP can uncover the most advanced of threats, including ransomware and file-less malware.

Cisco Hybrid Cloud Workload Protection

Cisco Tetration, Cisco’s proprietary analytics platform, provides workload protection for multicloud environments and data centers. It uses zero-trust segmentation, which enables users to quickly identify security threats and reduce their attack surface (all the endpoints where threats can gain entry). It supports on-prem and public Cloud workloads, and is infrastructure-agnostic.

Cisco’s Next-Gen Cloud Firewalls

Cisco’s VPN capabilities and virtual Next-Gen Firewalls provide flexible deployment options, so protection can be administered exactly where and when it’s needed, whether on-prem or in the Cloud.

For more information…

With the help of its state-of-the-art Security Operations Center (SOC), GDT’s team of security professionals and analysts have been securing the networks of some of the most noteworthy enterprises and service providers in the world. They’re highly experienced at implementing, managing and monitoring Cisco security solutions. You can reach them at SOC@gdt.com. They’d love to hear from you.

 

Flash, yes, but is it storage or memory?

We’ve all been pretty well trained to believe that, at least in the IT industry, anything defined or labeled as “flash” is a good thing. It conjures up thoughts of speed (“in a flash”), which is certainly one of the most operative words in the industry―everybody wants “it” done faster. But the difference between flash memory and flash storage is often confused, as both store information and both are referred to as solid state storage. For instance, a thumb drive utilizes flash memory, but is considered a storage device, right? And both are considered solid state storage devices, which means neither is mechanical; they’re electronic. Mechanical means moving parts, and moving parts means prone to failure from drops, bumps, shakes or rattles.

Flash Memory―short-term storage

Before getting into flash memory, just a quick refresher on what memory accomplishes. Memory can be viewed as short-term data storage, maintaining information that a piece of hardware is actively using. The more applications you’re running, the more memory is needed. It’s like a workbench, of sorts, and the larger its surface area, the more projects you can be working on at one time. When you’re done with a project, you can store it long-term (data storage), where it’s easily retrieved when needed.

Flash memory accomplishes its tasks in a non-volatile manner, meaning it doesn’t require power to retain data. It’s quickly accessible, smaller in size, and more durable than volatile memory, such as RAM (Random Access Memory), which requires the device to be powered on to access. And once the device is turned off, data in RAM is gone.

Flash Storage―storage for the long term

Much like a combustion engine, flash storage, the engine, needs flash memory, the fuel, to run. It’s nonvolatile (it doesn’t require power to retain data), and utilizes one of two (2) types of flash memory―NAND or NOR.

NAND flash memory writes and reads data in blocks, while NOR does it in independent bytes. NOR flash is faster and more expensive, and better for processing small amounts of code―it’s often used in mobile phones. NAND flash is generally used for devices that need to upload and/or replace large files, such as photos, music or videos.

Confusion between flash storage and flash memory might be non-existent for some, maybe even most, but it’s astounding how much information either confuses the two (2) or does a poor job differentiating them.

Contact the Flash experts

For more information about flash storage, including all-flash arrays, which contain many flash memory drives and are ideal for large enterprise and data center solutions, contact the talented, tenured solutions architects and engineers at GDT. They’re experienced at designing and implementing storage solutions, whether on-prem or in the cloud, for enterprises of all sizes. You can reach them at Engineering@gdt.com.

When considering an MSP, don’t forget these letters: ITSM and ITIL

It’s not hard to find a Managed Services Provider (MSP); the hard part is finding the right one. Of course, there are many, many things to consider when evaluating MSPs, including the quality of their NOC and SOC (don’t forget the all-important SOC), the experience of the professionals who manage and maintain them on a second-by-second basis, the length of time they’ve been providing managed services, the breadth and depth of their knowledge, and the range of customer sizes and industries they serve. But there’s something else that should be considered, and asked about, if you’re evaluating MSPs―whether they utilize ITSM and ITIL methodologies.

ITSM (Information Technology Service Management)

ITSM is an approach for the design, delivery, management and overall improvement of an organization’s IT services. Quality ITSM delivers the right people, technology, processes and toolsets to address business objectives. If you currently manage IT services for your organization, you have, whether you know it or not, an ITSM strategy. Chances are that if you don’t know you have one, it might not be very effective, which could be one (1) of the reasons you’re evaluating MSPs.

Ensure the MSPs you’re evaluating staff their NOC and SOC with professionals who adhere to ITSM methodologies. If an ITSM is poorly constructed and doesn’t align with your company’s goals, it will negatively reflect on whether ITIL best practices can be achieved.

ITIL (Information Technology Infrastructure Library)

ITIL is a best practices framework that helps align IT with business needs. It outlines complete guidelines for five (5) key IT lifecycle service areas: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. ITIL’s current version is 3 (V3), so it’s important not only to ensure an MSP follows ITIL methodologies, but also to make certain they’re well-versed in ITIL V3, which addresses twenty-eight (28) different business processes that affect a company’s ITSM.

Here’s the difference in ITSM and ITIL that you need to remember

ITSM is how IT services are managed. ITIL is a best practices framework for ITSM. So, put simply, ITSM is what you do, and ITIL is how to do it. ITIL helps make sense of ITSM processes. ITIL isn’t the only certification of its type in the IT industry, but it is undoubtedly the most widely used.

Without understanding the relationship between ITSM and ITIL, companies won’t gain business agility, operational transparency, and reductions in downtime and costs. And if your MSP doesn’t understand that relationship, they’re far less likely to deliver the aforementioned benefits.

For more info, turn to Managed Services Experts

Selecting an MSP is a big decision. Turning over the management of your network and security can be a make-or-break decision. Ensuring that they closely follow ITSM and ITIL methodologies is critically important.

For more information about ITSM and ITIL, contact the Managed Services professionals at GDT. They manage networks and security for some of the largest companies and service providers in the world from their state-of-the-art, 24x7x365 NOC and SOC. You can reach them at MSSales@gdtadvancedsolutions.com.

The story of the first Composable Infrastructure

In 2016, HPE introduced the first composable infrastructure solution to the marketplace. Actually, they didn’t just introduce the first solution, they created the market. HPE recognized, along with other vendors and customers, some of the limitations inherent in hyperconvergence, which provided enterprise data centers a cloud-like experience with on-premises infrastructures. But HPE was the first company to address these limitations, such as the requirement for separate silos for compute, storage and network. What this meant was that if there was a need to upgrade one of those silos, the others had to be upgraded as well, even if those upgrades weren’t needed. And hyperconvergence required multiple programming interfaces; with composable, a unified API can transform the entire infrastructure with a single line of code.

HPE Synergy

HPE Synergy was the very first “ground-up” built composable infrastructure platform, and is the very definition of HPE’s Idea Economy, which is a concept to address, in their words, the belief “that disruption is all around us, and the ability is needed to turn an idea into a new product or a new industry.”

HPE set out to address the elements that proved difficult, if not impossible, with traditional technology, such as the ability to:

  • Quickly deploy infrastructure through flexibility, scaling and updating
  • Run workloads anywhere, whether on physical or virtual servers…even in containers
  • Operate any workload without worrying about infrastructure resources or compatibility issues
  • Ensure the infrastructure can provide the right service levels to drive positive business outcomes

Hardware

The foundation of HPE’s Composable Infrastructure is the HPE Synergy 12000 frame (ten (10) rack units (RU)), which combines compute, storage, network and management into a single infrastructure. The frame’s front module bays easily accommodate and integrate a broad array of compute and storage modules. There are two (2) bays for management, with the Synergy Composer loaded with HPE OneView software to compose storage, compute and network resources in customers’ configuration of choice. And OneView templates are provided for provisioning of each of the three (3) resources (compute, storage and network), and can monitor, flag, and remediate server issues based on the profiles associated with them.

Frames can be added as workloads increase, and a pair of Synergy Composer appliances can manage, with a single management domain, up to twenty-one (21) frames.

A Unified API

The Unified API allows users, through the Synergy Composer user interface, to access all management functions. It operates at a high abstraction level and makes actions repeatable, which greatly saves time and reduces errors. And remember, a single line of code can address compute, storage and network, which greatly streamlines and accelerates provisioning, and allows DevOps teams to work and develop more rapidly.
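
To make the “single line of code” idea a little more concrete, below is a minimal sketch, assuming a reachable Synergy Composer and HPE OneView-style REST conventions; the address, credentials, API version, and the template and hardware IDs are hypothetical placeholders, and payload fields can differ by appliance and software version.

```python
# Minimal sketch: composing a new server (compute, storage and network come from the
# template) through a unified REST API. Endpoint paths and fields are illustrative;
# consult the OneView API reference for the exact schema your appliance exposes.
import requests

COMPOSER = "https://composer.example.com"   # hypothetical Synergy Composer address
session = requests.Session()
session.verify = False                      # lab-only; use proper certificates in production
session.headers.update({"X-Api-Version": "800"})

# Authenticate and attach the returned session token to subsequent requests.
auth = session.post(f"{COMPOSER}/rest/login-sessions",
                    json={"userName": "administrator", "password": "secret"}).json()
session.headers.update({"Auth": auth["sessionID"]})

# One request describes the whole workload; the template carries the compute,
# storage and network definitions that the Composer turns into a running server.
profile = {
    "name": "web-tier-01",
    "serverProfileTemplateUri": "/rest/server-profile-templates/EXAMPLE-TEMPLATE-ID",
    "serverHardwareUri": "/rest/server-hardware/EXAMPLE-BAY-ID",
}
resp = session.post(f"{COMPOSER}/rest/server-profiles", json=profile)
print(resp.status_code, resp.headers.get("Location"))
```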

Compute

HPE Compute modules, which come in a wide variety based on types of workloads required, create a pool of flexible capacity that can be configured to rapidly―practically instantaneously―provision the infrastructure for a broad range of applications. All compute modules deliver high levels of performance, scalability, and simplified storage and configurations.

Storage

Composable storage with HPE Synergy is agile and flexible, and offers many options that can address a variety of storage needs, such as SAS, SFF, NVMe SFF, Flash uFF, or diskless.

Network (aka Fabric)

HPE Synergy Composable Fabric simplifies network connectivity by using disaggregation to create a cost-effective, highly available and scalable architecture. It creates pools of flexible capacity that provisions rapidly to address a broad range of applications. It’s enabled by HPE Virtual Connect, and can match workload performance needs with its low latency, multi-speed architecture. This one device can converge traffic across multiple frames (creating a rack scale architecture) and directly connects to external LANs.

Talk to the experts

For more information about HPE Synergy and what it can provide to your organization, contact the talented, tenured solutions architects and engineers at GDT. They’re experienced at designing and implementing composable and hyperconverged solutions for enterprises of all sizes. You can reach them at Engineering@gdt.com.

 

Composable Infrastructure and Hyperconvergence…what’s the difference?

You can’t flip through a trade pub for more than twenty (20) seconds without reading one of these two (2) words, probably both: composable and hyperconvergence. Actually, there’s an extremely good chance you’ll see them together, considering both provide many of the same benefits to enterprise data centers. But with similarities comes confusion, leaving some to wonder when, or why, one should be used instead of the other. To add fuel to those flames of confusion, hyperconvergence and composable can be, and often are, used together, and even complement each other quite well. But, if nothing else, keep one (1) primary thought in mind―composable is the evolutionary next step from hyperconvergence.

In the beginning…

Hyperconvergence revolutionized data centers by providing them a cloud-like experience with an on-premises infrastructure. Since its inception approximately six (6) years ago (its precise age is up for debate), the hyperconvergence market has grown to just north of $3.5B. Hyperconvergence reduces a rack of servers down to a small, 2U appliance, combining server, software-defined storage, and virtualization. Storage is handled with software to manage storage nodes, which can be either physical or virtual servers. Each node runs virtualization software identical to other nodes, allowing for a single, virtualized storage pool comprised of the combined nodes. It’s all software-managed, and is especially handy in the event of equipment, or node, failure.

However, hyperconvergence, for all its benefits, has one (1) primary drawback―storage and compute must be scaled together, even if one or the other doesn’t need to be scaled at that very moment. For instance, if you need to add storage, you also have to add more compute and RAM. With composable infrastructures, you can add the needed resources independently of one another. In short, hyperconvergence doesn’t address as many workloads as composable infrastructure.

…then there was composable

Who coined the term Composable Infrastructure is up for debate, but HPE was definitely the first to deliver it to the marketplace with its introduction of HPE Synergy in 2016. Today there are many vendors, in addition to HPE, offering composable solutions, most notably Cisco’s UCS and Dell EMC’s VxBlock. And each of the aforementioned solutions satisfies the three (3) basic goals of composable infrastructures:

  • Software-Defined intelligence
    • Creates compute, storage and network connectivity from pooled resources to deploy VMs, on-demand servers and containers.
  • Access to a fluid pool of resources
    • Resources can be sent to support needs as they arise. The pools are like additional military troops that are deployed where and when they’re needed.
  • Management through a single, unified API
    • A unified API means the deployment of infrastructure and applications is faster and far easier; code can be written once that addresses compute, storage and network. Provisioning is streamlined and designed with software intelligence in mind.

Talk to the experts

For more information about hyperconverged or composable infrastructures, contact the talented, tenured solutions architects and engineers at GDT. They’re experienced at designing and implementing hyperconverged and composable solutions for enterprises of all sizes. You can reach them at Engineering@gdt.com.

 

Intent-Based Networking (IBN) is all the buzz

You may or may not have heard of it, but if you fall into the latter camp, it won’t be long until you do―probably a lot. Network management has always been associated with several words, none of them very appealing to IT professionals: manual, time-consuming and tedious. An evolution is taking place to remove those three (3) words from network management―Intent-Based Networking, or IBN.

It’s software

Some suggest that intent-based networking isn’t a product, but a concept or philosophy. Opinions aside, its nomenclature is confusing because “intent-based networking” doesn’t include an integral word―software.

Intent-based networking removes manual, error-prone network management and replaces it with automated processes that are guided by network intelligence, machine learning and integrated security. According to several studies regarding network management, it’s estimated that anywhere from 75% to 83% of network changes are currently conducted via CLIs (Command Line Interfaces). What this ultimately means is that CLI-driven network changes, which are made manually, are prone to mistakes, the number of which depends on the user making the changes. And the resultant network downtime from those errors means headaches, angry users and, worst of all, a loss of revenue. If revenue generation is directly dependent on the network being up, millions of dollars can be lost, even if the network is down for only a short period of time.

How does IBN work?

In the case of intent-based networking, the word intent simply means what the network “intends” to accomplish. It enables users to configure how, exactly, they intend the network to behave by applying policies that, through the use of automation and machine learning, can be pushed out to the entire infrastructure.

Wait a minute, IBN sounds like SDN

If you’re thinking this, you’re not the only one. They sound very similar, what with the ease of network management, central policy setting, use of automation, cost savings and agility. And to take that a step further, IBN can use SDN controllers and even augment SDN deployments. The main difference, however, lies in the fact that IBN is concerned more with building and operating networks that satisfy intent, rather than SDN’s focus on virtualization (creating a single, virtual network by combining hardware and software resources and functionality).

IBN―Interested in What is needed?

IBN first understands what the network is intended to accomplish, then calculates exactly how to do it. With apologies to SDN, IBN is simply smarter and more sophisticated. If it sounds like IBN is the next evolution of SDN, you’re right. While the degree or level of evolution might be widely argued, it would take Clarence Darrow to make a good case against evolution altogether. (Yes, I’m aware of the irony in this statement.)

Artificial Intelligence (AI) and Machine Learning

Through advancements in AI and algorithms used in machine learning, IBN enables network administrators to define a desired state of the network (intent), then rely on the software to implement infrastructure changes, configurations and security policies that will satisfy that intent.

Elements of IBN

According to Gartner, there are four (4) elements that define intent-based networking. And if they seem a lot like SDN, you’re right again. Basically, it’s only the first element that really distinguishes IBN from SDN.

  1. Translation and Validation – The end user inputs what is needed, the network configures how it will be accomplished, and validates whether the design and related configurations will work (see the sketch after this list).
  2. Automated Implementation – Through network automation and/or orchestration, the appropriate network changes can be configured across the entire infrastructure.
  3. Awareness of Network State – The network is monitored in real-time, and is both protocol- and vendor-agnostic.
  4. Assurance and Dynamic Optimization/Remediation – Continuous, real-time validation of the network is performed, and corrective action can be administered, such as blocking traffic, modifying network capacity, or notifying network administrators that the intent isn’t being met.
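
As a rough illustration of that first, distinguishing element, here’s a minimal sketch of the translation-and-validation idea: a single declared intent is expanded into per-device policies and checked against measured conditions before anything is pushed. The device names, policy fields and thresholds are hypothetical.

```python
# Minimal sketch of IBN-style "translation and validation": declare the intent once,
# expand it into candidate per-device policies, and validate the design before
# automating the push. All names and values are made up for illustration.
from dataclasses import dataclass

@dataclass
class Intent:
    app: str             # application the policy applies to
    segment: str         # which user segment may reach it
    max_latency_ms: int  # service-level expectation

def translate(intent: Intent, devices: list[str]) -> dict[str, dict]:
    """Expand one network-wide intent into candidate per-device policies."""
    return {d: {"permit": (intent.segment, intent.app),
                "sla_ms": intent.max_latency_ms} for d in devices}

def validate(configs: dict[str, dict], measured_latency_ms: dict[str, int]) -> bool:
    """Check that the proposed design can actually satisfy the declared intent."""
    return all(measured_latency_ms[d] <= cfg["sla_ms"] for d, cfg in configs.items())

intent = Intent(app="erp", segment="finance-users", max_latency_ms=50)
configs = translate(intent, ["edge-1", "edge-2"])
print(validate(configs, {"edge-1": 22, "edge-2": 41}))  # True -> safe to automate the push
```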

IBN―Sure, it’s esoteric, but definitely not just a lot of hype

If you have questions about intent-based networking and what it can do for your organization, contact one of the networking professionals at GDT for more information. They’ve helped companies of all sizes, and from all industries, realize their digital transformation goals. You can reach them here: Engineering@gdt.com. They’d love to hear from you.

Open and Software-Driven―it’s in Cisco’s DNA

Cisco’s Digital Network Architecture (DNA), announced to the marketplace approximately two (2) years ago, brings together all the elements of an organization’s digital transformation strategy: virtualization, analytics, automation, cloud and programmability. It’s an open, software-driven architecture that complements its data center-based Application-Centric Infrastructure (ACI) by extending that same policy-driven, software development approach throughout the entire network, including campuses and branches, be they wired or wireless. It’s delivered through the Cisco ONE™ Software family, which enables simplified software-based licensing and helps protect software investments.

What does all of that really mean?

With Cisco DNA, each network device is considered part of a unified fabric, which gives IT departments a simpler and more cost-effective means of taking control of their network infrastructure. Now IT departments can react at machine speed to quickly changing business needs, including security threats, across the entire network. Prior to Cisco DNA, reaction times relied on human-powered workflows, which ultimately meant making changes one device at a time. Now they can interact with the entire network through a single fabric, and, in the case of a cyber threat, address it in real-time.

With Cisco DNA, companies can address the entire network as one, single programmable platform. Ultimately, employees and customers will enjoy a highly enhanced user experience.

The latest buzz―Intent-based Networking

Cisco DNA is one of the company’s answers to the industry’s latest buzz phrase―intent-based networking. In short, intent-based networking takes the network management of yore (manual, time-consuming and tedious) and automates those processes. It accomplishes this by combining deep network intelligence and integrated security to deliver network-wide assurance.

Cisco DNA’s “five (5) Guiding Principles”:

  1. Virtualize everything. With Cisco DNA, companies can enjoy the freedom of choice to run any service, anywhere, independent of the underlying platform, be it virtual, physical, on-prem or in the cloud.
  2. Automate for easy deployment, maintenance and management―a real game-changer.
  3. Provide Cloud-delivered Service Management that combines the agility of the cloud with the security and control of on-prem solutions.
  4. Make it open, extensible and programmable at every layer, with open APIs (Application Programming Interfaces) and a developer platform to support an extensive ecosystem of network-enabled applications (a small sketch of this follows the list).
  5. Deliver extensive Analytics, which provide thorough insights on the network, the IT infrastructure and the business.
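
To make the fourth principle a little more concrete, here’s a minimal sketch of querying an intent-style REST API on a DNA Center appliance for its device inventory. The hostname and credentials are placeholders, and endpoint paths and response fields can vary by controller version, so treat this as illustrative rather than definitive.

```python
# Minimal sketch: asking the controller's intent API for the devices it manages,
# instead of polling each box individually. Host and credentials are placeholders.
import requests
from requests.auth import HTTPBasicAuth

DNAC = "https://dnac.example.com"   # hypothetical DNA Center address

# Obtain an API token (the token endpoint accepts basic authentication).
token = requests.post(f"{DNAC}/dna/system/api/v1/auth/token",
                      auth=HTTPBasicAuth("admin", "secret"), verify=False).json()["Token"]

# Query the network-wide inventory through the intent API.
devices = requests.get(f"{DNAC}/dna/intent/api/v1/network-device",
                       headers={"X-Auth-Token": token}, verify=False).json()

for dev in devices.get("response", []):
    print(dev.get("hostname"), dev.get("managementIpAddress"), dev.get("reachabilityStatus"))
```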

Nimble, simple and network-wide―that’s GDT and Cisco DNA

If you haven’t heard of either intent-based networking or Cisco’s DNA, contact one of the networking professionals at GDT for more information. They’ve helped companies of all sizes, and from all industries, realize their digital transformation goals. You can reach them here:  Engineering@gdt.com. They’d love to hear from you.

SD-WAN: Demystifying Overlay, Underlay, Encapsulation & Network Virtualization

More details on the subject will follow, but let’s get this out of the way first: SD-WAN is a virtual, or overlay, network; the physical, or underlay, network is the one on which the overlay network resides. Virtual overlay networks contain nodes and links (virtual ones, of course) and allow new services to be enabled without re-configuring the entire network. They are secure and encrypted, and are independent of the underlay network, whether it’s MPLS, ATM, Wi-Fi, 4G, LTE, et al. SD-WAN is transport agnostic―no offense, but it simply doesn’t care about the means of transport you’ve selected.

While the oft-mentioned benefits of SD-WAN include cost savings, ease of management and the ability to prioritize traffic, it also provides many other, less-mentioned benefits, including:

  • The ability for developers to create and implement applications and protocols more easily in the cloud,
  • More flexibility for data routing through multi-path forwarding, and
  • The easy shifting of virtual machines (VMs) to different locations, but without the constraints of the physical, underlay network.

Overlay networks have been around for a while; in fact, the Internet is an overlay network that originally ran across the underlying Public Switched Telephone Network (PSTN). And in 2018, most overlay networks, such as VoIP and VPNs, run atop the Internet.

Encapsulation

According to Merriam-Webster, the word encapsulation means “to enclose in or as if in a capsule.” And that’s exactly what occurs in SD-WAN, except the enclosure isn’t a capsule, but a packet. The encapsulation occurs within the physical network, and once the primary packet reaches its destination, it’s opened to reveal the inner, or encapsulated, overlay network packet. If the receiver of the delivered information isn’t authenticated, they won’t be able to access it.
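
Here’s a toy sketch of that nesting, purely for illustration: the overlay packet, with its virtual addresses, rides inside an underlay packet addressed between the physical tunnel endpoints. Real deployments use protocols such as IPsec or VXLAN for the wrapping and encryption; the classes and addresses below are made up.

```python
# Illustrative only: the underlay network sees just the tunnel endpoints, while the
# overlay addresses and payload travel inside the encapsulated (normally encrypted) packet.
from dataclasses import dataclass

@dataclass
class OverlayPacket:
    src: str        # virtual (overlay) source address
    dst: str        # virtual (overlay) destination address
    payload: bytes

@dataclass
class UnderlayPacket:
    src: str        # physical tunnel endpoint (e.g., branch edge device)
    dst: str        # physical tunnel endpoint (e.g., head-end device)
    inner: bytes    # the encapsulated overlay packet

def encapsulate(pkt: OverlayPacket, tunnel_src: str, tunnel_dst: str) -> UnderlayPacket:
    inner = f"{pkt.src}|{pkt.dst}|".encode() + pkt.payload   # stand-in for real encryption
    return UnderlayPacket(src=tunnel_src, dst=tunnel_dst, inner=inner)

overlay = OverlayPacket(src="10.1.1.5", dst="10.2.2.9", payload=b"app data")
wire = encapsulate(overlay, tunnel_src="203.0.113.10", tunnel_dst="198.51.100.20")
print(wire.src, "->", wire.dst)   # only the tunnel endpoints are visible to the underlay
```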

Network Virtualization

SD-WAN (and SDN) and network virtualization are often used interchangeably, but the former is really a subset of the latter. Both, through the use of software, connect virtual machines (VMs) that mimic physical hardware. And both allow IT managers to consolidate multiple physical networks, divide them into segments, and ultimately enjoy easier network management, automation, and improved speed.

Don’t leave your network to chance

WANs and LANs are the lifeblood of IT departments. If you’re considering SD-WAN and would like to enjoy the benefits it can, if deployed optimally, deliver, calling on experienced SD-WAN solutions architects and engineers should be your first order of business. Even though SD-WAN is widely touted as a simple, plug-n-play networking solution, there are many things to consider in addition to those wonderful benefits you’ve been hearing about for years. For instance, the use of multiple software layers can require more overhead, and the process of encapsulation can place additional demands on computing. Yes, there’s a lot to consider.

SD-WAN experts like those at GDT can help lead you through this critically important element of your digital transformation journey. They’ve done just that for enterprises of all sizes, and from a wide range of industries. You can reach their experienced SD-WAN solutions architects and engineers at SDN@gdt.com. They’d love to hear from you.

Dispelling myths about SD-WAN

Many of the misrepresentations of truth (OK, myths) that get bandied about regarding SD-WAN come from MPLS providers or network engineers who are happy with their current architecture and/or dread the thought of change. There’s no question, MPLS has been a great transport technology over the past fifteen (15) years or so, and its removal of Data Link Layer (OSI’s Layer 2) dependency to provide QoS (Quality of Service) across the WAN was a considerable step up from legacy solutions, such as frame relay and ATM. And it’s still a great, and widely used, transport protocol, and can be effectively utilized with SD-WAN. So, let’s start with this first myth…

SD-WAN is a replacement for MPLS

No question, SD-WAN is perfect for replacing MPLS in certain instances, especially as it pertains to branch offices. MPLS isn’t cheap, and provisioning it at each location requires a level of on-site expertise. Now consider the associated costs and hassles when a company has hundreds of locations. However, given the stringent QoS demands of many organizations, MPLS is still used to satisfy them, and it can perfectly augment SD-WAN, as well. MPLS provides very high, and very reliable, packet delivery, and many companies use it solely for traffic requiring QoS, pushing everything else across the SD-WAN.

SD-WAN and WAN Optimization are the same thing

WAN Optimization was designed to address traffic traversing legacy networks, like frame relay and ATM. It was a way to squeeze the most out of an existing network without having to expensively upgrade bandwidth at each site. Basically, the need for bandwidth outgrew the budgets set aside to buy more of it, and WAN Optimization, through caching and protocol optimization, allowed users to download cached information from a file that had already been downloaded―faster, more efficient use of bandwidth. But WAN Optimization can work in conjunction with SD-WAN, as it reduces latency across (very) long-distance WAN locations, satisfies certain QoS needs through data compression, and addresses TCP/IP protocol limitations.

SD-WAN is nothing more than a cost savings play

No question, SD-WAN is less costly than MPLS, and it utilizes inexpensive, highly commoditized Internet connections. But there is a long list of reasons to utilize SD-WAN that go above and beyond savings. It’s far easier to deploy than MPLS and can be centrally managed, which is ideal for setting policies, then pushing them out to all SD-WAN locations. SD-WAN works with the transport protocol of your choosing, whether that’s MPLS, 4G, Wi-Fi, or something else. And there’s no longer a requirement to lease lines from only one (1) service provider, so customers can enjoy far greater flexibility and the ability to monitor circuits regardless of the provider used.

SD-WAN requires a hybrid solution

Hybrid WANs, which utilize two (2) or more transport technologies across the WAN, are certainly not an SD-WAN requirement, but definitely work beautifully within that architecture. For instance, it’s not uncommon for organizations to utilize legacy networks for time-sensitive traffic, and SD-WAN for offloading certain applications to their corporate data center. A hybrid solution can allow for the seamless flow of traffic between locations so that, in the event one link experiences loss or latency, the other can instantly take over and meet associated SLAs.

Here’s one that’s NOT a myth: if you’d like to implement SD-WAN, you should turn to professionals who specialize in it

To enjoy all that SD-WAN offers, there is a spate of things to consider, from architectures and applications to bandwidth requirements and traffic prioritization. SD-WAN is often referred to as a simple plug-n-play solution, but there’s more to it than meets the eye. Yes, it can be a brilliant WAN option, but not relying on experts in SD-WAN technology may soon leave you thinking, “All that SD-WAN hype is just that…hype!”

Working with SD-WAN experts like those at GDT can help bring the technology’s many benefits to your organization and leave you thinking, “It’s no hype…SD-WAN is awesome.” They’ve done just that for many enterprises―large, medium and small. You can reach their experienced SD-WAN solutions architects and engineers at SDN@gdt.com. They’d love to hear from you.

Flexible deployment to match unique architectural needs

In late 2017, tech giant VMware purchased VeloCloud, which further strengthened and enhanced its market-leading position transitioning enterprises to a more software-defined future. The acquisition greatly built on the success of its leading VMware NSX virtualization platform, and expanded its portfolio to address branch transformation, security, end-to-end automation and application continuity from the data center to cloud edge.

Referred to as NSX SD-WAN, VeloCloud’s solution allows for flexible deployment and secure connectivity that easily scales to meet the demands of enterprises of all sizes―and they know about “all sizes.” VMware provides compute, mobility, cloud networking and security offerings to over 500,000 customers throughout the world.

NSX SD-WAN satisfies the following key WAN needs:

Scalability

From a central location, through a single pane of glass, enterprises of all sizes can build out branches in―literally―a matter of minutes, and set policies that are automatically pushed out to branch SD-WAN routers. Save the costs of sending a CCIE out to the branch office in Timbuktu or Bugtussle, and use the savings on other initiatives.

Security

With cloud applications, BYOD, and the need to utilize the cellular or broadband transport of users’ choosing, security is, as well it should be, of the utmost importance. The robust NSX SD-WAN architecture secures data and traffic through a secure overlay atop any type of transport, regardless of the service provider. Best of all, it returns to IT the ability to manage security, control and compliance from a central location.

Bandwidth Demands

With the growing―and growing―use of cloud applications, the need to utilize less expensive bandwidth is critically important. NSX SD-WAN can aggregate circuits to offer more bandwidth and deliver optimal cloud application performance.

Cloud Applications

If your employees aren’t currently spending an inordinate amount of time in the cloud, they will be. NSX SD-WAN provides direct access to the cloud, bypassing the need, common with MPLS networks, to first backhaul traffic to a data center and then hand it off to the cloud. With that backhauling comes latency and a less-than-satisfying cloud experience.

NSX SD-WAN―Architecture friendly

When you’ve got over a half million customers around the world, it’s imperative to provide a solution that takes into account the many architectures that have been deployed. Regardless of the type of SD-WAN required―whether Internet-only or a Hybrid solution utilizing an existing MPLS network―NSX SD-WAN can satisfy the need.

GDT’s team of expert SD-WAN solutions architects and engineers have implemented SD-WANs for some of the largest enterprises and service providers in the world. For more information about what SD-WAN can provide for your organization, contact them at sdn@gdt.com. They’d love to hear from you.

 

How Companies are Benefiting from IT Staff Augmentation

Companies have been augmenting their IT departments for years with professionals who can step in and make an immediate impact by utilizing their skill sets and empirical expertise. And it’s not limited to engineers or solutions architects. Project managers, high-level consultants, security analysts, DevOps professionals, cabling experts…the list is only limited by what falls within the purview of IT departments. It’s the perfect solution when a project or initiative has a finite timeline and requires a very particular level of expertise. And it can deliver a host of other benefits, as well, by providing:

Greater Flexibility

Change and evolving business needs go hand-in-hand with information technology. Now more than ever, IT departments are tasked with creating more agile, cutting-edge business solutions, and their ability to quickly adapt can easily be a make-or-break proposition for companies. You might not have the time or money to quickly find those individuals who can help expedite your company’s competitive advantage(s) in the marketplace.

Cost Effectiveness

Bringing an IT professional onboard full-time to focus on a particular project can be cost prohibitive if you’re left wondering how they can be utilized once the project is completed. And, of course, there are the costs of benefits to consider, as well. According to the U.S. Department of Labor, benefits are worth about 30% of compensation packages.

Reduced Risk and More Control

Augmenting IT staff, rather than outsourcing an entire project, can not only help ensure the right skill sets are being utilized, but risk can be mitigated by maintaining oversight and control in-house.

Quicker, Easier Access to the right IT pro’s

Thankfully, unemployment is lower than it’s been in years, and in the IT industry it’s less than half the national average. So quickly finding the right person with the perfect skill set can seem harder than finding a needle in a haystack. Companies’ recruiting efforts don’t focus exclusively on IT; they’re filling jobs in finance, marketing, HR, manufacturing, et al. Turning to IT staff augmentation experts who maintain large networks of professionals can uncover the right personnel quickly.

An answer to Attrition

Remember that low jobless rate in the IT sector? Sure, it’s great news, but it also means there’s a lot of competition for the right resources. There will be attrition―it’s a given. And utilizing staff augmentation can help combat that by placing individuals on specific projects and initiatives for a designated period of time.

Call on the Experts

If you have questions about augmenting your IT staff with the best and brightest the industry has to offer, contact the GDT Staffing Services professionals at staffaug@gdt.com. Some of the largest, most noteworthy companies in the world have turned to GDT so key initiatives can be matched with IT professionals who can help drive those projects to completion. They possess years of IT experience and expertise, and maintain a vast network of IT professionals who maintain the highest levels of certification in the industry. They’d love to hear from you.

The Plane Truth about SD-WAN

You can’t get more than a few words into any article, blog or brochure about SD-WAN without reading how the control and data planes are separated. For many, this might fall under the “As long as it works, I don’t really care about it” heading. And that’s evident based on a lot of the writing on the subject―it’s mentioned, but that’s about as far as the explanation goes. But the uncoupling of the control and data planes in SD-WAN is a fairly straightforward, easy-to-understand concept.

Control Plane comes first…

Often regarded as the brains of the network, the control plane is what controls the forwarding of information within the network. It controls routing protocols, load balancing, firewall configurations, et al., and determines the route data will take across the network.

…then Data Plane

The data plane forwards the traffic based on information it receives from the control plane. Think UPS. The control plane is dispatch telling the truck(s) where to go and exactly how to get there; the truck delivering the item(s) is the data plane.

So why is separating the control plane and data plane in SD-WAN a good thing?

In traditional WAN hardware, such as routers and switches, both the control plane and data plane are embedded in the equipment’s firmware. Setting up, or making changes to, a new location requires that the hardware be accessed and manually configured (see Cumbersome, Slow, Complicated). With SD-WAN, the de-coupled control plane is embedded in software, so network management is far simpler and can be overseen and handled from a central location.
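
As a rough illustration of that separation, here’s a minimal sketch in which a central “control plane” computes forwarding tables from link costs and the “data plane” simply looks up the next hop it was given. The topology, node names and costs are made up, and real controllers use far richer policy and telemetry.

```python
# Minimal sketch of control/data plane separation: routes are decided centrally,
# and each forwarding node only performs table lookups. All values are illustrative.
import heapq

def shortest_next_hops(links: dict[str, dict[str, int]], source: str) -> dict[str, str]:
    """Control plane: Dijkstra from `source`, returning the first hop toward each destination."""
    dist = {source: 0}
    first_hop: dict[str, str] = {}
    heap = [(0, source, None)]
    while heap:
        d, node, hop = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                first_hop[nbr] = hop or nbr
                heapq.heappush(heap, (nd, nbr, hop or nbr))
    return first_hop

def forward(table: dict[str, str], dest: str) -> str:
    """Data plane: no decision making, just a lookup in the table the controller pushed."""
    return table[dest]

links = {
    "branch": {"hub": 10, "internet-gw": 3},
    "hub": {"dc": 5},
    "internet-gw": {"dc": 4},
}
table = shortest_next_hops(links, "branch")
print(forward(table, "dc"))   # -> "internet-gw": chosen centrally, executed locally
```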

Here are a few more benefits that SD-WAN users are enjoying as a result of the separation of the Control and Data Planes:

  • Easier deployment; SD-WAN routers, once connected, are automatically authenticated and receive configuration information.
  • Real-time optimal traffic path detection and routing.
  • Traffic that’s sent directly to a cloud services provider, such as AWS or Azure, and not backhauled to a data center first, only then to be handed off to the Internet.
  • A significant reduction in bandwidth costs when compared to MPLS.
  • Network policies that no longer have to be set for each piece of equipment, but can be created once and pushed out to the entire network.
  • Greatly reduced provisioning time; a secondary Internet circuit is all that’s needed, so the weeks spent awaiting the delivery of a new WAN circuit from a service provider are a thing of the past.
  • A reduction of costs, headaches and hassles thanks to SD-WAN’s agnostic approach to access type and/or service provider.

Call on the SD-WAN experts

Enterprises and service providers are turning to SD-WAN for these, and many other, reasons, but there are a lot of architectures (overlay, in-net, hybrid) and SD-WAN providers from which to choose. And, like anything else regarding the health and well-being of your network, due diligence is of the utmost importance. That’s why enlisting the help and support of SD-WAN solutions architects and engineers will help ensure that you’ll be able to enjoy the most that SD-WAN can offer.

To find out more about SD-WAN and the many benefits it can provide your organization, contact GDT’s tenured SD-WAN engineers and solutions architects at SDN@gdt.com. They’ve implemented SD-WAN solutions for some of the largest enterprise networks and service providers in the world. They’d love to hear from you.

Cisco’s Power of v

In the spring of 2017, Cisco put both feet into the SD-WAN waters with its purchase of San Jose, Ca.-based Viptela, a privately held SD-WAN company. One of the biggest reasons for the acquisition was Cisco’s ability to easily integrate Viptela software into its own platforms. Prior to the acquisition, Cisco’s SD-WAN solution utilized its own IWAN software, which delivered a somewhat complex, unwieldy option. The merger of IWAN and Viptela formed what is now called, not surprisingly, Cisco SD-WAN.

Questions concerning the agility and effectiveness of Cisco SD-WAN can best be answered by the following quote from Cisco customer Agilent Technologies, a manufacturer of laboratory instruments:

“Agilent’s global rollout of Cisco SD-WAN enables our IT teams to respond rapidly to changing business requirements. We now achieve more than 80% improvement in turnaround times for new capability and a significant increase in application reliability and user experience.”

The following four (4) “v” components are what comprise Cisco’s innovative SD-WAN solution.

Controller (vSmart)

What separates SD-WAN from those WAN technologies of the past is its decoupling of the Data Plane, which carries the traffic, from the Control Plane, which directs it. With decoupling, the controls are no longer maintained in equipment’s firmware, but in software that can be centrally managed. Cisco’s SD-WAN controller is called vSmart, which is cloud-based and uses Overlay Management Protocol (OMP) to manage control and data policies.

vEdge routers

Cisco’s SD-WAN routers are called vEdge, and they receive data and control policies from the vSmart controller. They can establish secure IPSec tunnels with other vEdge routers, and can be either on-prem or installed in private or public clouds. They can run traditional routing protocols, such as OSPF or BGP, to satisfy LAN needs on one side and WAN needs on the other.

vBond―the glue that holds it together

vBond is what connects and creates those secure IPSec tunnels between vEdge routers, after which key intel, such as IP addressing, is communicated to vSmart and vManage.

vManage

Managing the WAN traffic from a centralized location is what makes SD-WAN, well…SD-WAN. vManage provides that dashboard through a fully manageable, graphical interface from which policies and communications rules can be monitored and managed for the entire network. Different topologies can be designed and implemented through vManage, whether it’s hub and spoke, spoke to spoke, or to address specific needs to accommodate different access types.
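
As a small illustration of that central dashboard idea, here’s a minimal sketch of pulling the device list from a vManage-style REST API rather than logging in to each vEdge individually. The vManage address and credentials are placeholders, and paths and field names can vary by software release, so treat it as a sketch only.

```python
# Minimal sketch: one call to the central management API instead of per-device logins.
# Address, credentials and field names are placeholders.
import requests

VMANAGE = "https://vmanage.example.com"
session = requests.Session()
session.verify = False   # lab-only; use proper certificates in production

# Form-based login; the session cookie carries authentication afterward.
session.post(f"{VMANAGE}/j_security_check",
             data={"j_username": "admin", "j_password": "secret"})

# Retrieve the fabric-wide device inventory from the dashboard API.
devices = session.get(f"{VMANAGE}/dataservice/device").json().get("data", [])
for dev in devices:
    print(dev.get("host-name"), dev.get("device-type"), dev.get("reachability"))
```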

To enjoy the Power of v, contact the experts at GDT

GDT has been a preferred Cisco partner for over 20 years, and its expert SD-WAN solutions architects and engineers have implemented SD-WANs for some of the largest enterprises and service providers in the world. Contact them at sdn@gdt.com. They’d love to hear from you.

SDN and SD-WAN: A Father & Son Story

SD-WAN (software-defined WAN) has been all the rage for a few years now, coming to the rescue of enterprises that had spent considerable chunks of their IT budgets on MPLS to connect offices scattered throughout the world. But it’s not to be confused with SDN (software-defined networking), which, even though the two share “software-defined” in their titles, is different. Think of SDN as the parent technology, and SD-WAN as its up-and-coming son. Yes, they’re similar, but different.

The root of their common name

The shared SDN and SD-WAN nomenclature is due to the separation of their Control and Data Planes, which makes them, along with many other benefits, easier to deploy and manage. With both SDN and SD-WAN, the Control Plane, which directs traffic, isn’t in the equipment’s firmware, but in software, which allows for ease of management from a central location. Without that separation, equipment must be accessed and manually configured for each location. And to do that, a level of technical expertise is needed, so the thought of having an office manager try to configure a router is, well… Let’s just say it’s not going to happen. Flights and hotel stays ensue, so the travel costs alone for implementing an MPLS network with dozens of branch locations are exorbitant. Now add in the high costs of MPLS circuits and the long wait times for provisioning, and you’re looking at an expensive, time-intensive wide area network.

Different career paths

As is the case with many fathers and sons, SDN and SD-WAN have chosen different career paths. Each has its own specialty: SDN for local area networks, data centers and service providers’ core networks, and SD-WAN to augment, or replace, MPLS-based wide area networks. Through Network Function Virtualization (NFV), SDN can be configured and programmed by the customer through software that was once held in closed, proprietary systems. SDN allows organizations to quickly and easily (and without disruption) adapt to ever-changing compute, storage and networking needs.

SD-WAN

There’s no question, the “cost savings” label is bestowed upon SD-WAN more than SDN. As mentioned earlier, the savings to connect branch offices with SD-WAN are considerable when compared to MPLS. While a secondary Internet connection is needed, the low-cost, commoditized price of broadband is significantly less expensive than MPLS circuits. And SD-WAN provides a lot more than cost savings. SD-WAN routers can bring locations online in a matter of minutes, as authentication and configuration are automated. It deftly steers traffic around network bottlenecks, and traffic can be prioritized so latency-sensitive, high-bandwidth applications traverse accommodating network paths. And SD-WAN is carrier and transport agnostic, so different service providers can be selected by location, and traffic can be carried by the transport protocol of choice, whether 4G, Wi-Fi, or even MPLS.

Call on the experts

While the benefits, and reasons, to move to SDN or SD-WAN are compelling, there are several issues and elements to consider prior to implementing either. That’s why it’s best to consult with software-defined solutions architects and engineers like those at GDT. They’re experienced at deploying cutting-edge, innovative solutions for some of the largest enterprise and service provider networks in the world. Contact them at SDN@gdt.com. They’d love to hear from you.

What is Digital Transformation?

We’ve all heard of it; we know our company should be striving to achieve it; but what exactly is…digital transformation?

Many people, at least those outside of the IT and telecommunications industries, may have been first introduced to the digital world through clocks or CDs, leaving them with the question, “Haven’t we been digitally transformed for years?” Well, yes, in a sense, but when digital is used with transformation, it means something altogether different. In the simplest of definitions, digital transformation refers to how companies utilize technology to change:

  • the way their business operates,
  • how they engage their customers, and
  • how they become more competitive, and profitable, as a result.

This transformation accelerates positive change across all departments and provides, if done correctly, agility, efficiencies, innovation, and key analytics to help companies make more educated business decisions.

Becoming more competitive

Whether or not a company has a digital transformation strategy, they can be certain of one thing―their competitors do. Creating and implementing one is not easy, especially for companies who’ve enjoyed long-term success. Here’s why: it requires them to do what will probably be very uncomfortable, even unthinkable―re-think processes and procedures that may have been in place, and successful, for decades, and even be prepared to scrap them, if necessary.

Digital transformation is somewhat like human factors engineering (AKA ergonomics), which forces companies to better understand, even feel, the end user experience. Companies need to, as author Stephen Covey wrote in his book The 7 Habits of Highly Effective People, begin with the end in mind. They need to imagine how they’d like to engage customers, keep them engaged, and monetize that user experience. From there, they can begin to reverse engineer what it will take to get there (yes, that’s where it gets really challenging).

The move toward edge devices

Edge devices, of course, refer to the point at which a network is accessed. Ask a 60-year-old network engineer what he considers to be an edge device, and he’ll probably list routers, switches, multiplexers, et al.—all of the equipment that provides access to LANs (Token Ring, Ethernet) and WANs, which support a wide array of technologies, such as frame relay, ISDN, ATM and MPLS. Lower that age group and respondents will probably think IoT, then list off smartphones, tablets and sensors, such as doorbells, thermostats and security systems—basically, anything that runs iOS, Android or Linux and has an IP address.

So how are these edge devices an integral component of digital transformation? Well, they represent the sundry ways customers can enjoy an enhanced end user experience. And while customers are enjoying that better experience, the company, in turn, is accessing vital information and key analytics to help them make more impactful and better-educated business decisions. The result? Enhanced, targeted marketing, happier and more well-informed customers, operational efficiencies enjoyed by multiple departments and business units, and, of course, higher revenue.

Digital Transformation

The next time somebody asks you about digital transformation or what it means, you’ll know what to say in under twenty-five (25) words: “Digital transformation is the utilization of technology to enhance the end user experience, transform business processes and greatly advance value propositions.”

For more information about how your organization can develop or enhance its digital transformation journey, call on the expert solutions architects and engineers at GDT. For years they’ve been helping customers of all sizes, and from all industries, realize their digital transformation goals. Contact them at SolutionsArchitects@gdt.com. They’d love to hear from you.

Enjoy the Savings (including those of the soft variety) with SD-WAN

Sure, there are many, many benefits of utilizing SD-WAN that go well beyond cost savings, but the dollar signs tend to get the most press (big surprise). But savings aren’t limited to costs reflected solely within IT budget line items―they stretch far and wide, and include, as a byproduct, many soft cost savings that organizations of all sizes, and from all industries, are currently enjoying with SD-WAN.

Hard Cost Savings

Bandwidth

Hard cost savings are certainly the easiest to calculate; they’re the ones reflected in the lower bills you’ll receive from your MPLS provider, like AT&T, CenturyLink, Charter Spectrum, et al. Connecting branch offices with MPLS isn’t cheap, and provisioning it can also be expensive in terms of time. New circuits or upgrades can easily take weeks to accomplish, and who has time for that? Sure, MPLS offers excellent QoS (quality of service) and is a very stable, reliable technology, but SD-WAN has come a long, long way toward addressing requirements like QoS. And if offices, especially those of the smaller, remote variety, aren’t running real-time applications and instead access their applications via the cloud, SD-WAN is ideal.

For SD-WAN, another Internet circuit is needed to run as a companion to your existing one. And if you haven’t noticed, the cost of dedicated, high-bandwidth Internet circuits is remarkably inexpensive, especially when compared to an MPLS circuit that delivers comparable bandwidth. Neither Internet connection stands by idle; both are hard at work satisfying your networking needs. SD-WAN automatically looks for, and steers your traffic around, bottlenecks in the network that could cause jitter, latency and, of course, dropped packets.

Hardware

Having a dedicated router at a branch office might make more sense from a cost standpoint if it’s supporting dozens, even hundreds, of employees, but it becomes more and more cost-prohibitive as those numbers go down. Moving to a SaaS (software-as-a-service) model means getting away from upfront, capital expenditures and moving them to a more budget-friendly, pay-as-you-go cloud model. That’s not to say that new hardware doesn’t need to be deployed for SD-WAN, but SD-WAN routers are highly flexible, simpler (they include traditional routing and firewall capabilities) and less expensive than a traditional router. Oh, and they’re much smaller―for instance, Viptela’s SD-WAN vEdge routers are all 1RU (less than 2” tall). Also, they’re compatible with traditional routers, so there’s no need to yank them out and set them next to the dumpster just yet.

The Harder-to-Calculate Soft Cost Savings

Soft costs are often overlooked, primarily because they’re harder to calculate. But there’s no question that SD-WAN, if implemented correctly, can deliver a lot of soft cost savings (like those listed below) that should definitely be calculated and taken into consideration.

Productivity

Consider the productivity your organization can lose waiting for an MPLS circuit to be delivered or upgraded. There’s also the very real possibility of network downtime due to the provisioning of an MPLS circuit. And troubleshooting those circuits, whether they’re new or experiencing issues, takes time―often lots of it.

Travel Costs

With SD-WAN, the days are gone when a member of your IT staff has to travel to a branch location to install and configure a router. SD-WAN allows new sites to quickly and easily be turned up, and done so within a matter of minutes.

Performance

With a secondary Internet circuit installed, SD-WAN can easily and automatically re-route traffic in the event one (1) of the circuits goes down. With MPLS, traffic to cloud-based applications is usually backhauled to the data center first, after which it’s handed off to the Internet. This can add latency and reduce performance. Not so with SD-WAN.

Time

SD-WAN is carrier neutral and can be used with the transport protocol of your choosing, whether 4G, MPLS, Wi-Fi, etc. And you don’t have to worry about securing circuits from only one (1) service provider, which provides far greater flexibility. And SD-WAN provides the ability to monitor all circuits, regardless of service provider.

Got questions? GDT’s expert SD-WAN network architects have answers

The SD-WAN experts at GDT have implemented SD-WAN solutions for organizations of all sizes. They know how to implement a solution that not only provides savings, both hard and soft, but delivers the many benefits SD-WAN can provide. Contact them at SDN@gdt.com. They’d love to hear from you.

Who doesn’t want a turnkey, integrated backup solution?

That’s exactly what you’ll get with Dell EMC’s Integrated Data Protection Appliance

Two words: data protection. There probably isn’t a more important combination of words in the IT industry. Obviously, Dell EMC agrees―their latest IDPA (integrated data protection appliance) is a turnkey, pre-integrated appliance that brings together protection storage, search and analytics across a wide array of applications and platforms. And with Dell EMC’s new capabilities that address cloud data protection, critical information can be backed up from anywhere in the world, and at any time.

Listening to the marketplace

Relying primarily on empirical knowledge, Dell EMC designed the DP4400 to address what the marketplace needed: simplicity. And that’s exactly what it delivers―the DP4400 is a single, stand-alone, 2U appliance that not only provides considerable turnkey (and easily upgradeable) storage, but is also very affordable when compared to competing products.

Cloud Ready

Cloud features are built into the DP4400, and no cloud gateways are needed. It not only provides data protection, but natively extends the same level of protection to the cloud. Cloud Disaster Recovery (DR) and Long-Term Retention (LTR) are built into the DP4400, and add-ons are not only easy to deploy, but scalable.

And speaking of LTR, Dell EMC guarantees 55:1 deduplication to a private, public or hybrid cloud, and the DP4400 allows for the management of up to 14 petabytes (PB) of capacity (yep, that’s on a single DP4400). What does this really mean? It means that managing virtual or physical tape libraries is a thing of the past.

VMware (a Dell EMC company)

The DP4400 is optimized for VMware, which Dell picked up as part of its purchase of EMC in 2016. Automation is provided across the entire VMware data protection stack, including VM backup policies and automation.

And there’s more…

The DP4400 is:

  • Customer-installable/upgradable, and a 2U appliance (ah, that means it’s small―3 ½ inches tall),
  • “Grow-in-place” rich (24-96TB), requiring no additional hardware,
  • Capable of providing up to 2x shorter backups,
  • Able to use up to 98% less bandwidth, and
  • Backed by a 3-year satisfaction guarantee and up to a 55:1 data protection deduplication guarantee through Dell EMC’s Future-Proof Loyalty Program.

This only scratches the surface of what Dell EMC’s DP4400 IDPA can bring to your organization. For more information, contact info@gdt.com.

Enjoy on-prem benefits with a public cloud experience

If you listen closely, you can practically hear IT professionals the world over asking themselves the same question―“If I utilize the public cloud, how can I maintain control and enjoy the security I get from on-premises infrastructures?” And if that question does indeed steer them away from cloud services, they’re left with the ongoing, uneasy feeling that comes from overprovisioning capacity and long-awaited circuit upgrades.

HPE has the answer to this IT conundrum

HPE GreenLake Flex Capacity is a hybrid cloud solution that provides customers with a public cloud experience and the peace of mind that can come with on-premises deployments. Like cloud services, HPE GreenLake Flex Capacity is a pay-as-you-go solution that offers capacity on-demand and quickly scales to meet growth needs, but without the (long) wait times associated with circuit provisioning.

And with HPE GreenLake Flex Capacity, network management is greatly simplified, as customers can manage all cloud resources in the environment of their choosing.

HPE GreenLake Flex Cap’s many benefits include…

  • Limitation of risk (and wracked nerves) by maintaining certain workloads on-prem
  • Better alignment of cash flows with your business due to no upfront costs and a pay-as-you-go model
  • No more wasteful circuit overprovisioning
  • Rapid scaling, which provides an ability to immediately address changing network needs
  • Receipt of real-time failure alerts and remediation recommendations that provide vital, up-to-date information
  • Ability to right-size capacity

And combined with HPE Pointnext…

HPE GreenLake Flex Cap delivers availability, reliability and optimization, and lets customers’ IT professionals concentrate on the initiatives and projects that will help shape their company’s future. And HPE’s services organization, Pointnext, can not only monitor and manage the entire solution, but provides a customer portal that delivers key analytics, including detailed consumption metrics.

Questions? Call on the experts

If you have additional questions or need more information about HPE GreenLake Flex Capacity and the many benefits it can provide your IT organization, contact Pam Bull, GDT’s HPE point of contact, at pam.bull@gdt.com. She’d love to hear from you.

 


How SD-WAN can enhance application performance

Remember the days when a new software application meant downloads, licenses, and minimum RAM and processing power requirements? Or when applications resided in a corporate data center and were accessed over expensive, leased lines from service providers, only then to be handed off to the Internet? Expensive, inefficient, and prone to latency―not a good networking triad. And direct Internet access can be fraught with issues, as well, leaving end users with unpredictable, inconsistent application performance and a spate of trouble tickets left in their wake.

Hello SD-WAN―a friend to the application. While content is king in the marketing world, applications enjoy a similar, regal role in the business world. It’s estimated that each worker uses between 5.5 and 8 different computer-based applications each day, and another 7 to 10 of the mobile variety. An inability to access any one of them can quickly derail your, and your company’s, day. Here are some of the many ways SD-WAN can enhance your organization’s mission-critical applications:

Sidestep the bottlenecks

SD-WAN is similar to traffic reports on drivetime radio, only better―much better. Imagine that your car hears the traffic report, then automatically steers you around the construction without you even knowing any traffic snarls existed. SD-WAN works much the same way: it continually searches for bottlenecks in the network (packet drop, jitter and latency), after which the best, least congested route is selected.
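
Here’s a toy sketch of that steering decision, assuming the edge device has fresh loss, jitter and latency measurements for each available transport; the weights and numbers are arbitrary examples, not vendor defaults.

```python
# Illustrative path scoring: measure each transport, score it, and steer traffic to the
# healthiest path until the next round of measurements. Values are made up.
paths = {
    "mpls":      {"latency_ms": 18, "jitter_ms": 2,  "loss_pct": 0.0},
    "broadband": {"latency_ms": 35, "jitter_ms": 9,  "loss_pct": 0.4},
    "lte":       {"latency_ms": 60, "jitter_ms": 15, "loss_pct": 1.2},
}

def score(m: dict) -> float:
    # Lower is better: weight loss most heavily, then jitter, then raw latency.
    return m["loss_pct"] * 100 + m["jitter_ms"] * 5 + m["latency_ms"]

best = min(paths, key=lambda p: score(paths[p]))
print(best)   # traffic is steered to this path until conditions change
```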

Prioritize traffic by application

With SD-WAN, policies can be set up so certain applications traverse select network paths with less latency and greater bandwidth. And, conversely, lower-priority traffic, such as backups or Internet browsing, can be delivered via less expensive and/or less reliable connections.

Fast access

With SD-WAN, new sites can be turned up in a matter of minutes, enabling users quick access to applications. When an SD-WAN edge appliance is plugged in, it automatically connects, authenticates and receives configuration information.

Centralized policy management

Priorities can be centrally managed for each application based on any number of policies, such as QoS, reliability, security and visibility. Also, this prioritization can be designated by users, dates, times or office locations.

SLA adherence

With SD-WAN, companies can set up policies per application, including respective SLA criteria (packet loss, jitter, latency), so particular applications are only directed over the connections that meet the SLA requirements. And if that connection goes down, the traffic can be re-routed to meet SLAs, even if it means being routed over a broadband or MPLS link.
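
As a rough sketch of how such a policy might be expressed, the snippet below declares per-application SLA thresholds and walks a preference list until it finds a path that currently meets them; the applications, paths and numbers are hypothetical.

```python
# Illustrative per-application SLA policy: only place an application on a path whose
# current measurements meet that application's thresholds. All values are made up.
from typing import Optional

sla = {
    "voip":   {"loss_pct": 1.0, "jitter_ms": 10, "latency_ms": 50},
    "backup": {"loss_pct": 5.0, "jitter_ms": 50, "latency_ms": 300},
}
paths = {
    "mpls":      {"loss_pct": 0.1, "jitter_ms": 3,  "latency_ms": 20},
    "broadband": {"loss_pct": 0.8, "jitter_ms": 12, "latency_ms": 45},
}

def compliant(app: str, path: str) -> bool:
    return all(paths[path][k] <= sla[app][k] for k in sla[app])

def pick_path(app: str, preferred: list[str]) -> Optional[str]:
    # Take the first path in preference order that still meets the application's SLA.
    return next((p for p in preferred if compliant(app, p)), None)

print(pick_path("voip", ["broadband", "mpls"]))    # broadband fails jitter -> "mpls"
print(pick_path("backup", ["broadband", "mpls"]))  # "broadband" is good enough
```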

It’s Transport―and carrier―agnostic

Because SD-WAN is a virtual WAN, it can be used with the transport protocol of your choosing, such as MPLS, 4G, Wi-Fi, et al. And there’s no longer a need to lease lines from only one (1) service provider, which gives customers far greater flexibility, including the ability to monitor circuits regardless of the service provider.

Before you go all in on SD-WAN…

…engage the GDT SD-WAN expert solutions architects and engineers at SDN@gdt.com. They’re experienced at providing SD-WAN solutions for companies of all sizes.

Is SD-WAN the same as WAN Optimization?

Aside from the list of positives you’ve likely heard about SD-WAN (and there are many), there’s one thing it isn’t―WAN Optimization. Many incorrectly use SD-WAN and WAN Optimization interchangeably. That isn’t to say SD-WAN doesn’t greatly optimize networks, just that it’s not technically WAN Optimization, which was introduced roughly fifteen (15) years ago when WAN circuits were, well, pricey.

WAN Optimization refers to techniques and technologies that maximize the efficiency of data traversing the network, which, basically, allows companies to get the most out of legacy networks that still utilize WAN connections from telco providers, such as AT&T, Charter Spectrum, Level 3, and the like. Fifteen (15) years ago WAN Optimization was all the rage. Bandwidth requirements outgrew many of the IT budgets companies set aside to upgrade WAN connections, so WAN Optimization was the answer. Through caching and protocol optimization, end users could download cached information from a file that had already been downloaded. In short, it squeezed as much bandwidth juice from the WAN as possible.
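
A toy sketch of the caching idea, purely for illustration: the first request for an object pays the WAN round trip, and repeat requests are served from the local branch cache. The fetch function and file name are hypothetical stand-ins.

```python
# Illustrative branch-side cache: repeat requests never touch the WAN.
cache: dict[str, bytes] = {}
wan_transfers = 0

def fetch_over_wan(name: str) -> bytes:
    global wan_transfers
    wan_transfers += 1                    # stands in for an expensive WAN round trip
    return f"contents of {name}".encode()

def get(name: str) -> bytes:
    if name not in cache:                 # cache miss: pay the WAN cost once
        cache[name] = fetch_over_wan(name)
    return cache[name]                    # cache hit: no WAN bandwidth consumed

for _ in range(3):
    get("quarterly-report.pptx")
print(wan_transfers)   # 1 -> two of the three requests were served locally
```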

It worked well for some traffic, but not all, and required dedicated hardware at headquarters and each remote location (then came the management and maintenance…). But bandwidth costs began to drop―precipitously―and having Gig connections became both commonplace and affordable.

Sounds like the death of WAN Optimization, right?

Not so fast. If you surmised that cheaper, commoditized bandwidth and SD-WAN teamed up to toss WAN Optimization onto the scrapheap, you’ve surmised incorrectly. No question, the wallet-friendly cost of broadband and, of course, SD-WAN have reduced the desire for WAN Optimization, but not the need for it. WAN Optimization can serve as an impactful supplement to SD-WAN, and can allow you to make the most out of your infrastructure by:

  • Reducing latency on very wide area networks, meaning those that span long distances.
  • Compressing data to address TCP/IP protocol limitations and satisfy stringent QoS requirements.
  • Addressing congestion due to limited bandwidth, which can limit SD-WAN’s ability to more quickly re-route traffic.
  • Handling slower, chattier protocols more efficiently.

Call on the experts

If you have questions about how SD-WAN can be utilized to bring its many benefits to your organization, like enhanced application performance, less complexity, greater flexibility and reduced network costs, contact GDT’s team of experienced SD-WAN solutions architects and engineers at SDN@gdt.com. They’d love to hear from you.

Cisco HyperFlex runs point on customers’ hyperconverged journeys

The term hyperconvergence has been getting a lot of press in the last few years, and rightly so. It provides pretty much everything that legacy IT infrastructures don’t―flexibility, scalability and simplicity. It enables, in a single system, the management of equipment to handle a wide range of workloads: database management, collaboration, packaged software such as SAP and Oracle, virtual desktops, analytics, web servers, and more. It’s software-defined, which is another way of saying quicker network provisioning, more control and visibility, and less downtime.

Cisco HyperFlex

HyperFlex, Cisco’s answer to hyperconvergence, is being successfully utilized by a wide range of industries. The following are a few of the many ways in which organizations of all sizes are enjoying Cisco HyperFlex:

Virtual Desktops

There was a time, not too long ago, when companies couldn’t pull the trigger on a virtual desktop solution due to the high upfront costs. Sure, they loved the idea, but just couldn’t make it fit into their budget. HyperFlex not only addresses the prohibitive cost issue, but does so by successfully tackling another one that organizations investigating a virtual desktop infrastructure (VDI) were faced with―complexity.

Branch or Remote Offices

Whether through organic growth or due to a merger or acquisition, one thing is certain―your organization’s IT needs today will soon look different. So whether growth includes more employees, more locations, or both, HyperFlex allows for an easy way to deploy hardware wherever it’s needed while being managed from a central location.

Server Virtualization

With HyperFlex, virtual server resources can be reallocated as needed to address the changing demands on storage, compute, and networking. Legacy systems require different approaches to each (see Complexity).

DevOps

Developers are always under the gun to rapidly roll out solutions to address ever-evolving business needs. Without hyperconvergence, however, their job to do so is much more taxing, as hardware provisioning needs to be separately considered for storage, networking, virtualization and compute. This is exacerbated because Agile project management and development requires regular, on-going testing and remediation. With Cisco HyperFlex, virtualized hardware can be easily configured to accommodate frequent revisions and testing.

Cisco HyperFlex provides Software-Defined…

…Compute. Cisco’s Unified Computing System (Cisco UCS) is the foundation on which HyperFlex is built, and provides an easy, single point of management so resources can be easily adjusted to address the shifting needs of businesses.

…Storage. Cisco’s HyperFlex HX Data Platform software is a high-performance, distributed file system that supports hypervisors (Virtual Machine Monitors, or VMMs) with optimization and data management services.

…Networking. Cisco’s UCS provides a highly adaptive environment that offers easy integration with Cisco Application Centric Infrastructure (Cisco ACI), Cisco’s software-defined networking (SDN) solution that delivers hardware performance with software flexibility.

Call on the experts

To find out more about Cisco HyperFlex and what hyperconvergence can do for your organization, contact GDT’s hyperconvergence experts at SolutionsArchitects@gdt.com. They’d love to hear from you.

Why Companies are Turning to Mobility Managed Solutions (MMS)

If mobility isn’t the most used word of the past ten (10) years, it’s got to be a close second. And mobility is no longer just about using smartphones or tablets to purchase Christmas presents and avoid trips to the shopping mall. Mobility is transforming the way businesses operate, how their employees collaborate, and, ultimately, how they generate more revenue. With the rapidly increasing implementation of BYOD (Bring Your Own Device), companies need to ensure connectivity is fast, reliable, seamless and highly secure. And with the Internet of Things (IoT), companies can now offer customers immediate value and utilize advanced data analytics to better understand buyers’ tendencies and purchasing behaviors.

With so much at stake, it’s critical that companies carefully develop a mobility strategy that helps employees optimize their time and ultimately deliver bottom line results. Following are some of the many reasons why companies are turning to MMS providers to ensure they’ll get the most out of their mobility solutions.

Skillsets

Counting on your existing IT staff to have the necessary skillsets in place to create, then implement, a mobility strategy could end up costing your organization considerable time and money. Having them attempt to ramp up their mobility education is fine, but it lacks one key component―experience. You wouldn’t have a surgeon with no prior hands-on experience operate on you or a loved one. Why do the same with your company’s mobility strategy?

Resources

Lack of experience goes hand-in-hand with poor time management. In other words, the less experience, the longer it will take. And pulling existing IT staff off other important key initiatives could mean putting projects on hold, if not cancelling them altogether. And the time it takes to remediate events that have occurred due to the lack of empirical knowledge will only exacerbate the issue.

Security

With the ever-increasing demands for mobility solutions and applications, ensuring that critical company data is protected can’t be overlooked or handled piecemeal. Doing so will leave you in reactive, not proactive, security mode. Mobile security is being enhanced and improved on a regular basis, but without the needed expertise on staff, those security enhancements could go unnoticed and unimplemented. Also, an experienced Mobility Managed Solutions provider can help you set needed security policies and guidelines.

Maximizing Employee Productivity

One of the key reasons companies develop and enhance mobility solutions is to help ensure employee productivity is maximized. Not conducting fact-finding interviews with different departments to understand their existing and evolving demands will mean your mobility strategy is only partially baked. And trying to retro-fit solutions to address overlooked elements will result in additional time and unnecessary costs.

Monitoring

Mobility solutions aren’t a set-it-and-forget-it proposition. They must be managed, monitored and optimized on a regular basis. Updates need to be maintained and administered. And as with any new technology roll-out, there will be confusion and consternation, so technical support needs to be prepped and ready before trouble tickets start rolling in.

Best Practices

There are a number of best practices that must be considered when developing and implementing mobility solutions. Are you in a heavily regulated industry and, if so, does your strategy adhere to industry-related mandates? Have mobile form factors and operating systems been taken into consideration? Will roll-out be conducted all at once or in a phased approach? If phased, have departmental needs been analyzed and prioritized? Have contingency plans been developed in the event roll-out doesn’t perfectly follow the script you’ve written?

Costs

Lacking the mobility experience and skillsets on staff could mean unnecessary costs are incurred. In fact, studies have shown that companies utilizing a MMS provider can save anywhere from 30 to 45% per device.

Experienced Expertise

Each of the aforementioned mobility considerations is critically important, but all fall under one (1) primary umbrella―experience. You can read a book about how to drive a car, but it won’t do you much good unless you actually drive a car. It’s all about the experience, and mobility solutions are no different. Hoping you have the right skillsets on staff and hoping it will all work out are other ways of saying High Risk. Hope is not a good mobility solutions strategy.

If you have questions about your organization’s current mobility strategy, or you need to develop one, contact GDT’s Mobility Solutions experts at Mobility_Team@gdt.com. They’re comprised of experienced solutions architects and engineers who have implemented mobility solutions for some of the largest organizations in the world. They’d love to hear from you.

GDT hosts VMware NSX Workshop

 

On Thursday, June 28th, GDT hosted a VMware NSX workshop at GDT’s Innovation Campus. It was a comprehensive, fast-paced training course focused on installing, configuring, and managing VMware NSX™. It covered VMware NSX as a part of the software-defined data center platform, including functionality operating at Layers 2 through 7 of the OSI model. Hands-on lab activities were included to help support attendees’ understanding of VMware NSX features, functionality, and on-going management. Great event, as always!

 

GDT Lunch & Learn on Agile IoT

On Tuesday, June 19th, GDT Associate Network Systems Engineer Andrew Johnson presented, as part of the GDT Agile Operations (DevOps) team’s weekly Lunch & Learn series, info about the wild world of IoT (Internet of Things). Andrew provided a high-level overview of what IoT is and what can be done when all things are connected. As more and more devices get connected, the ability to draw rich and varied information from the network is changing how companies, governments and individuals interact with the world.

Why this market will grow 1200% by 2021!

An IDC report released in 2017 predicted that the SD-WAN market would grow from what was then roughly $700M to over $8B by 2021. IDC has since revised that figure. Now it’s over $9B.

SD-WAN is often, yet incorrectly, referred to as WAN Optimization, but the phrase is actually an apt description of what SD-WAN delivers. The sundry WAN solutions of the past twenty-five (25) years―X.25, private lines (T1s/DS3s) and frame relay―gave way to Multi-Protocol Label Switching (MPLS) in the early 2000’s.

MPLS moved beyond frame relay’s Committed Information Rate (CIR)―a throughput guarantee―and offered Quality of Service (QoS), which allows customers to prioritize time-sensitive traffic, such as voice and video. MPLS has been the primary means of WAN transport over the last fifteen (15) years, but SD-WAN provides enterprises and service providers tremendous benefits above and beyond MPLS, including the following:

Easier turn-up of new locations

With MPLS, as with any transport technology of the past, turning up a new site or upgrading an existing one is complex and time consuming. Each edge device must be configured separately, and the simplest of changes can take weeks. With SD-WAN, a new location can be provisioned automatically, greatly reducing both time and complexity.

Virtual Path Control

SD-WAN software can direct traffic in a more intelligent, logical manner, and is also, like MPLS, capable of addressing QoS. SD-WAN can detect a path’s degradation and re-route sensitive traffic based on its findings. Also, having backup circuits stand by unused (and costing dollars, of course) is a thing of the past with SD-WAN.
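
As a rough illustration of that path-selection logic (not any particular vendor’s algorithm; the paths, metrics and thresholds below are invented), a controller might re-route latency-sensitive traffic the moment a path’s measured loss or delay violates policy:

    # Illustrative sketch of SD-WAN-style dynamic path selection.
    PATHS = {
        "mpls":      {"latency_ms": 18, "loss_pct": 0.0},
        "broadband": {"latency_ms": 32, "loss_pct": 0.1},
        "lte":       {"latency_ms": 55, "loss_pct": 0.8},
    }

    # Policy for voice/video: at most 30 ms latency and 0.5% loss.
    VOICE_POLICY = {"max_latency_ms": 30, "max_loss_pct": 0.5}

    def select_path(paths, policy):
        """Return the best path that meets policy, else the least-bad one."""
        eligible = [
            name for name, m in paths.items()
            if m["latency_ms"] <= policy["max_latency_ms"]
            and m["loss_pct"] <= policy["max_loss_pct"]
        ]
        candidates = eligible or list(paths)
        return min(candidates,
                   key=lambda n: (paths[n]["loss_pct"], paths[n]["latency_ms"]))

    print(select_path(PATHS, VOICE_POLICY))   # "mpls"

    # Simulate degradation on the MPLS circuit; voice shifts automatically.
    PATHS["mpls"]["loss_pct"] = 2.0
    print(select_path(PATHS, VOICE_POLICY))   # "broadband" (least-bad path)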

Migration to Cloud-based Services

With traditional WAN architectures, traffic gets backhauled to a corporate or 3rd party data center, which is costly and reduces response times. SD-WAN allows traffic to be sent directly to a cloud services provider, such as AWS or Azure.

Security

SD-WAN provides a centralized means of managing security and policies, and utilizes standards-based encryption regardless of transport type. And once a device is authenticated, assigned policies are downloaded and cloud access is granted―quick, easy. Compare that to traditional WANs, where security is handled by edge devices and firewalls. Far more complex and costly.
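
A minimal, hypothetical sketch of that authenticate-then-download-policy flow might look like the following (the device IDs, secrets and policy contents are invented for illustration only):

    # Hypothetical controller-side flow: authenticate an edge device,
    # then hand it its centrally managed policy set.
    REGISTERED_DEVICES = {"edge-branch-12": "pre-shared-secret"}

    POLICIES = {
        "edge-branch-12": {
            "encryption": "ipsec-aes256",
            "allowed_clouds": ["aws", "azure"],
            "qos": {"voice": "priority", "bulk": "best-effort"},
        }
    }

    def onboard(device_id, secret):
        if REGISTERED_DEVICES.get(device_id) != secret:
            raise PermissionError("authentication failed; no policy issued")
        return POLICIES[device_id]   # policy is pushed only after auth

    policy = onboard("edge-branch-12", "pre-shared-secret")
    print(policy["allowed_clouds"])   # ['aws', 'azure']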

…and last, but not least

SD-WAN can greatly reduce bandwidth costs, which are often the greatest expense IT organizations incur, especially if they’re connecting multiple locations. MPLS circuits are pricey, and SD-WAN can utilize higher bandwidth, lower cost options, such as broadband or DSL.

Does SD-WAN mark the end of MPLS?

Given the stringent QoS demands of some enterprise organizations, and the fear that SD-WAN won’t be able to accommodate them, it’s unlikely that SD-WAN will totally replace MPLS. And some organizations are simply averse to change, and/or fear their current IT staff doesn’t have the necessary skillsets to successfully migrate to SD-WAN, then properly monitor and manage it moving forward.

Call on the SD-WAN experts

To find out more about SD-WAN and the many benefits it can provide your organization, contact GDT’s tenured SD-WAN engineers and solutions architects at SDN@gdt.com. They’ve implemented SD-WAN solutions for some of the largest enterprise networks and service providers in the world. They’d love to hear from you.

Calculating the costs, hard and soft, of a cloud migration

When you consider the costs of doing business, you might only see dollar signs―not uncommon. But if your organization is planning a cloud migration, it’s important to understand all costs involved, both hard and soft. Sure, calculating the hard costs of a cloud migration is critically important―new or additional hardware and software, maintenance agreements, additional materials, etc.―but failing to consider and calculate soft costs could mean pointed questions from C-level executives will embarrassingly go unanswered. And not knowing both types of costs could result in IT projects and initiatives being delayed or cancelled—there’s certainly a cost from that.

When you’re analyzing the many critical cloud migration components―developing risk assessments, analyzing the effects on business units, applications and interoperability―utilize the following information to help you uncover all associated costs.

First, you’ll need a Benchmark

It’s important to first understand all costs associated with your current IT infrastructure. If you haven’t calculated that cost, you won’t have a benchmark against which you can evaluate and compare the cost of a cloud migration. Calculating direct costs, such as software and hardware, is relatively easy, but ensure that you’re including additional expenses, as well, such as maintenance agreements, licensing, warranties, even spare parts, if utilized. And don’t forget to include the cost of power, A/C and bandwidth. If you need to confirm cost calculations, talk with accounts payable―they’ll know.
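
If it helps to structure that benchmark, here is a simple, hypothetical calculation; every figure is a placeholder to be replaced with your own numbers from accounts payable:

    # Hypothetical annual benchmark of an existing on-prem environment.
    current_costs = {
        "hardware_depreciation": 120_000,
        "software_licensing":     80_000,
        "maintenance_contracts":  35_000,
        "warranties_and_spares":  10_000,
        "power_and_cooling":      18_000,
        "bandwidth":              24_000,
    }

    annual_benchmark = sum(current_costs.values())
    print(f"Current annual run rate: ${annual_benchmark:,}")
    # Compare this figure against the estimated annual cloud spend
    # (provider fees + migration + retraining) to judge the business case.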

Hard Costs of a Cloud Migration (before, during and after)

Before

Determining the hard costs related to cloud migrations includes any new or additional hardware required. That’s the easy part―calculating the monthly costs from cloud service providers is another issue. It has gotten easier, especially for Amazon Web Services (AWS) customers: AWS offers an online tool that calculates Total Cost of Ownership (TCO) and monthly costs. But it’s still no picnic. Unless you have the cloud-related skillsets on staff, getting an accurate assessment of monthly costs might require you to incur another, but worthwhile, hard cost―hiring a consultant who understands and can conduct a risk assessment prior to migration.

During

Cloud service providers charge customers a fee to transfer data from existing systems. And there might be additional costs in the event personnel are needed to ensure customers’ on-prem data is properly synced with data that has already been transferred. Ensuring this data integrity is important, but not easy, especially for an IT staff with no prior cloud migration experience.

After

Other than the monthly costs you’ll incur from your cloud provider of choice, such as AWS or Azure, consideration must be given to the ongoing maintenance costs of your new cloud environment. And while many of these are soft costs, there can be hard costs associated with them, as well, such as the ongoing testing of applications in the cloud.

The Hard-to-Calculate Soft Costs

If they’re not overlooked altogether, soft costs are seldom top-of-mind. Determining the value of your staff’s time isn’t hard to calculate (project hours multiplied by their hourly rate, which is calculated by dividing weekly pay by 40 (hours)), but locking down the amount of time a cloud migration has consumed isn’t easy. Now try calculating one that hasn’t taken place yet. There might be a cost in employee morale, as well, in the event the cloud migration doesn’t succeed or deliver as planned.
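
Using that formula, a quick, hypothetical example: an engineer paid $2,400 per week has an hourly rate of $2,400 ÷ 40 = $60, so 250 project hours represents $15,000 of soft cost. The figures below are placeholders, not benchmarks:

    # Hypothetical soft-cost estimate for staff time spent on a migration.
    weekly_pay = 2_400                     # placeholder salary figure
    hourly_rate = weekly_pay / 40          # $60/hour
    project_hours = 250                    # placeholder estimate
    soft_cost = project_hours * hourly_rate
    print(f"Estimated staff soft cost: ${soft_cost:,.0f}")   # $15,000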

Consider the amount of time required to properly train staff and keep them cloud-educated into perpetuity―today’s cloud will look a lot different than future generations.

The testing and integrating of applications to be migrated takes considerable time, as well, and several factors must be considered, such as security, performance and scalability. Testing should also include potential risks that might result in downtime, and ensuring interoperability between servers, databases and the network.

Also, there’s a far greater than 0% chance your cloud migration won’t go exactly as planned, which will require additional man hours for proper remediation.

There are also soft costs associated with projects that are put on hold, especially if they delay revenue generation.

If questions exist, call on the experts

Here’s the great news―moving to the cloud, provided the migration is done carefully and comprehensively, will save considerable hard and soft costs now and in the future. Calculating the costs of a cloud migration is important, but not an easy or expeditious venture.

If you have questions about how to accurately predict the costs of a future cloud migration, contact GDT’s Cloud Experts at AWSTeam@gdt.com. They’d love to hear from you.

A Fiber Optic First

It’s one of those “Do you remember where you were when…?” questions, at least for those fifty (50) years old or older. And it didn’t just affect those in northern, hockey-friendly states. People as far south as Texas stopped their cars at the side of the road and began honking their car horns, then breaking into The Star-Spangled Banner and America the Beautiful while passing motorists sprayed them with wet grime. It was Friday, February 22nd when radios nationwide announced that the impossible had occurred at the 1980 Winter Olympics in Lake Placid, NY—the United States hockey team, comprised primarily of college-aged amateur athletes, had just defeated the Soviet Union Red Army team, considered by most familiar with the sport to be the best hockey team of all time.

The closing seconds, announced worldwide by legendary sportscaster Al Michaels, became arguably the most well-known play-by-play call in sports history:

“11 seconds, you’ve got 10 seconds, the countdown going on right now! Morrow, up to Silk. Five seconds left in the game. Do you believe in miracles? YES!”

Legendary for several reasons

The game, which Sports Illustrated named the greatest sporting event in American history, is legendary for other reasons, as well. The TV broadcast of the game actually occurred later that evening during prime time on ABC, and was part of the first television transmission that utilized fiber optics. While it didn’t deliver the primary TV transmission, it was used to provide backup video feeds. Based on its success, it became the primary transmission vehicle four (4) years later at the 1984 Winter Olympics in Sarajevo, Yugoslavia.

Why fiber optics will be around―forever

It’s no wonder fiber optics carries the vast majority of the world’s voice and data traffic. There was a time (late 1950’s) when it was believed satellite transmission would be the primary, if not exclusive, means for delivering worldwide communications. It wasn’t the Olympics, but a 1958 Christmastime speech by President Eisenhower to allay Americans’ Cold War fears that was the first delivered via satellite. But if you’re a user of satellite television, you’ve certainly experienced the network downtime that comes with heavy cloud cover or rain.

And wireless communications, such as today’s 4G technology (5G will be commercially available in 2020), requires fiber optics to backhaul data from wireless towers back to network backbones, which is then delivered to its intended destination via…fiber optics.

The question regarding fiber optics has been debated for years: “Will any technology on the horizon replace the need for fiber optics?” Some technologists (although there appear to be few) say yes, but most say no―as in absolutely no. Line-of-sight wireless communications are an option, and have been around for years, but deploying them in the most populated areas of the country―cities―is impractical. If anything stands between communicating nodes, you’ll be bouncing your signal off a neighboring building. Not effective.

Facebook will begin trials in 2019 for Terragraph, a service they claim will replace fiber optics. Sure, it might in some places, such as neighborhoods, but is only capable of transmitting data to 100 ft. or less. It’s the next generation of 802.11, but, while it’s capable of transmitting data at speeds up to 30 Gbps, it’s no option for delivering 1’s and 0’s across oceans.

Fiber is fast, it’s durable, and it lasts a long time. Yep, fiber optics will be around for a while.

Did you know?

  • Fiber optic signals travel at nearly the speed of light, and aren’t affected by EMI (electromagnetic interference).
  • Without electricity coursing through it, fiber optics doesn’t create fire hazards. And add to that fact―it’s green, as in eco-friendly green. And it degrades far less quickly than its coax and copper counterparts.
  • Fiber is incredibly durable, and isn’t nearly as susceptible to breakage as copper wire or coaxial cable. Also, fiber has a service life of 25-35 years.
  • There’s less attenuation with fiber, meaning there’s a greatly reduced chance it will experience signal loss.
  • With Dense Wave Division Multiplexing (DWDM), the fiber’s light source can be divided into as many as eighty (80) wavelengths, with each carrying separate, simultaneous signals.

Call on the experts

If you have questions about how your organization can get the most out of optical networking, contact The GDT Optical Transport Team at Optical@gdt.com. They’re comprised of highly experienced optical engineers and architects, and support some of the largest enterprise and service provider networks in the world.

Migrating to the Cloud? Consider the following

First, follow Stephen Covey’s unintentional Cloud Migration advice

Stephen Covey, in his 1989 bestselling book The 7 Habits of Highly Effective People, lists “Begin with the end in mind” as the second habit. But in the event you’re considering a cloud migration for your organization, Covey’s second habit should be your first.

Yes, you must first fully understand the desired end results for moving to the cloud before you do so. Whether it’s cost savings, greater flexibility, more robust disaster recovery options, better collaboration options, work-from-anywhere options, automatic software and security updates, enhanced competitiveness in the marketplace, and better, safer controls over proprietary information and documentation, you need to ensure the precise goals are outlined and communicated so everybody in your organization understands the “end in mind.” There needs to be a carefully considered reason prior to your journey. You don’t get in the car and start driving without knowing where you want to go; why would you do it on your cloud journey?

Prior to any cloud migration, you must do exactly what you would prior to any other type of journey―go through your “To-Do” checklist. Without this level of scrutiny, your cloud migration will gloss over, if not totally exclude, key elements that need to be considered ahead of time. But not checking off necessary considerations prior to a cloud migration will be far more defeating than not packing your favorite pillow or a toothbrush. Trying to correct problems from a poorly planned cloud migration can cost considerable time, expense and credibility.

The following will give you an idea of the key questions that must be asked, and carefully considered and answered, prior to beginning your organization’s cloud journey.

What’s your Cloud Approach?

Will you be utilizing a public or private cloud model, or a combination (hybrid) of the two (2)? Will you maintain certain apps on-premises or in a data center, and be using more of a Hybrid IT approach? The answer to these questions involves several key elements, including, to name a few, existing licenses, architectures and transaction volume. And considering the “6 R’s” regarding Cloud migrations will greatly assist in helping you develop the right Cloud Approach:

Remove

Empirically speaking, it’s not uncommon for organizations to discover that as much as 20-30% of their current applications aren’t being utilized and are prime candidates for total shut down.

Retain

Determine which applications should remain managed on-prem. For instance, certain latency- or performance-sensitive applications, or any that involve sensitive and/or industry-regulated data, might not be right for the cloud. There are several applications that are simply not supported to run in the cloud, and some require specific types of servers or computing resources.

Re-platform

Which applications will benefit from moving to a different platform once migrated to the Cloud, saving time and hassle related to database management? Amazon Relational Database Service (Amazon RDS) is a database-as-a-service (DBaaS) that makes setting up, operating and scaling relational databases in the cloud much easier.
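
For instance, standing up a managed database on Amazon RDS comes down to a single API call. The sketch below uses the AWS SDK for Python (boto3); the identifier, sizing and credentials are placeholders, not recommendations:

    import boto3

    # Placeholder values throughout; engine, sizing and credentials should
    # come from your own requirements and a secrets manager.
    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="example-app-db",
        DBInstanceClass="db.t3.medium",
        Engine="postgres",
        AllocatedStorage=100,                  # GiB
        MasterUsername="appadmin",
        MasterUserPassword="change-me-immediately",
        MultiAZ=True,                          # managed failover
        BackupRetentionPeriod=7,               # daily backups kept 7 days
    )
    # RDS then handles provisioning, patching, backups and failover,
    # which is where the re-platforming time savings come from.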

Re-host

Often referred to as “lift and shift,” moving certain applications to the Cloud can often be accomplished more easily with existing automation tools, such as AWS’s VM Import/Export.

Re-purchase

Which current applications can be replaced and utilized in the Cloud (SaaS)?

Refactor

If scaling, enhanced performance, or new features can be accomplished via a Cloud migration, the applications in question might need to be re-factored or re-architected.

What’s the Prioritization Order of Applications that will be Migrated?

It probably won’t come as a surprise to hear that the least critical applications should be migrated first. Start with applications that won’t leave your entire organization hamstrung if down or inaccessible, and work up from there. Subsequent, more critical application migrations will benefit from the prior experience(s).

Are Security Concerns being considered?

Think about each of the network security demands and policies that must be closely monitored and adhered to. How will they be affected by a cloud migration? Think about any industry-related requirements, such as HIPAA, PCI and those mandated by FERC or the FTC. As data migrates to the public cloud, changes in governance strategies will probably need to be addressed.

Are the Needed Cloud Migration Skillsets on staff?

Trying to retrofit existing IT personnel with a slew of quick-study certifications will leave one important element out of the equation―experience. Think of it this way: you can read a book about swimming, but it doesn’t really mean much until you get in the water. So, if your staff has only read about cloud migrations, you’ll probably want to turn to somebody who’s been in the cloud migration water for years. And doing so will help educate your staff, even provide them with the confidence to test new approaches.

Have costs been carefully considered?

Ask IT personnel why they’re moving to the cloud, and if “to save costs” isn’t mentioned first, it soon will be. Yes, moving to the cloud can save considerable costs (if done correctly), but no two (2) environments are alike when it comes to the degree of savings moving them to the cloud will deliver. In fact, some legacy applications might cost more if moved to the cloud. And additional bandwidth and associated costs must be taken into consideration, as well. Also, make sure you understand how licensing for each application is structured, and whether the licensing is portable if moved to the cloud.

Call on the experts

Moving to the cloud is a big journey, and doing so could be one of the biggest in your career. The question is, “Will it be a positive or negative journey?” Turning to experienced Cloud experts like those at GDT can point your cloud migration needle in a positive direction. They hold the highest levels of Cloud certifications in the IT industry, and can be reached at AWSTeam@gdt.com. They’d love to hear from you.

Are you Cloud-Ready?

Let’s face it, moving to the cloud is sexy. It’s the latest thing―at least as far as the general public is concerned―and proudly stating “We’re moving everything to the cloud” sounds modern, cutting-edge, even hip (if you want to impress people at a cocktail party, inform them that the concept has actually been around for fifty (50) years. The Cloud’s real impact, however, was felt in the late 1990’s when Salesforce came onto the scene and began delivering an enterprise application to customers via their website). Yes, everybody, it seems, wants to move to the cloud.

While many might feel their organization is cloud-ready, the truth is most are not. It seems and sounds so simple to move applications to the cloud (you just log into a website and start using the application, right?), but a lot of preparation, interviews and fact-finding must be conducted ahead of time.

The following is a list of questions you should ask yourself prior to a cloud migration. If companies can’t ask themselves, and answer, these questions, their cloud migration will leave them wondering if moving to the cloud was such a great idea in the first place.

Why are you moving to the Cloud?

If “Because it’s the thing to do” is your answer, even if you’re too embarrassed to state it publicly, it’s time to give the question deep-diving, considerable thought. And “Because it will save costs” isn’t enough prep, either. The Cloud offers many benefits, of course, but to fully realize them requires extensive knowledge regarding how to get them. If cloud migrations are completed correctly and comprehensively, your organization can enjoy greater flexibility, more robust disaster recovery options, capital expenditure savings, more effective collaboration, work-from-anywhere options, automatic software and security updates, enhanced competitiveness in the marketplace, and better, safer controls over proprietary information and documentation. But to get any or all of those, your current environment first needs to be risk assessed.

Have you conducted a Risk Assessment?

Risk assessments are a critical component of cloud migrations. Consideration needs to be given to:

  • Savings, both in costs and time
  • How the cloud solution can, and will, be right-sized to meet the unique demands of your organization
  • The role automation, if needed, will play in your cloud deployment
  • How staff resources will be managed, including any previous cloud expertise and skillsets you have on staff
  • The ongoing monitoring of the cloud solution, and the ability to analyze usage and make necessary adjustments when needed (and they will be needed)
  • Security needs, including compliance with any industry-related regulations, such as, to name a couple, HIPAA and PCI
  • How sensitive data will be protected
  • Disaster recovery, including backups and auto-recovery
  • The ability to satisfy Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs)

Failure to consider and satisfy any of the aforementioned could mean your cloud migration is doomed to fail. Again, a detailed, comprehensive risk assessment is a critical component that must be conducted prior to building a cloud migration strategy.

How will moving to the cloud affect business operations, not just IT?

Thinking outside the IT box is critically important. Interviews with key stakeholders from all business units―finance, marketing, accounting, project management, sales, DevOps, HR, etc.―need to be conducted to determine and understand their practices and goals, and how the cloud migration will affect, and enhance, them. A thorough analysis of the current environment needs to be conducted to understand how departments work interdependently. IT infrastructure, security, application dependencies and cost analysis need to be considered for each.

Are my applications Cloud-Ready?

It’s important to understand which applications are well-suited to move to the cloud, including related options for each. Some applications should be moved to the cloud, some should be in a private cloud, and others shouldn’t, or can’t, be moved at all. Each organization has unique needs and requirements, and all need to be incorporated into a migration plan that both organizes and prioritizes them so desired results can be achieved. For instance, certain mission-critical applications probably shouldn’t be migrated first, as their downtime might bring the entire organization to a grinding and costly halt.

Call on the Experts

The many benefits of moving to the Cloud are achievable, but getting there requires a level of expertise and associated skillsets that most organizations don’t already have on staff. If you have questions about moving to the Cloud, regardless of the size of your organization and its associated infrastructure, contact the GDT Cloud experts at AWSTeam@gdt.com. They’d love to hear from you.

When SOC plays second fiddle to NOC, you could be in for an expensive tune

It’s not uncommon for people, even some IT professionals, to assume all of their organization’s security needs are being addressed through their NOC (Network Operations Center). Chances are, they’re not. NOCs and SOCs (Security Operations Centers) are entirely different animals, with different goals, and each is staffed by IT professionals with different skillsets and security-related industry certifications. Sure, they both identify issues, then work to resolve them, but most of the similarities end there.

In 2017, well over 4 billion records were exposed to cyberattacks. Believing your company is somehow shielded from them because it’s not of the Fortune 500 variety is a fool’s paradise. No company, regardless of its size or the industry within which it operates, is immune from threats. In a recent Global Information Security survey, only half of the participating organizations believed they could even detect or predict a cyberattack. Amazingly, many organizations view security as an afterthought, and cobble together a security plan with existing personnel who are ill-equipped to handle the intricacies and demands needed to fend off the bad guys―unfortunately, there are a lot of them.

The SIEM―what it is, and why it’s critically important

It can be argued that the SIEM (Security Information and Event Management system) is the fuel that makes the SOC engine run. It collects information from devices that are on or access the network, including login attempts and data transfers, then alerts security professionals of any potential threats. There was a time when SIEMs got a bad rap, some of it deservedly so. At one time, they generated a lot of false positives, which resulted in many “boy who cried wolf” scenarios. Many customers didn’t trust them to reliably provide usable information, at least on a regular basis, and quite possibly ignored alerts on actual threats. Thankfully, however, SIEMs have gotten far more accurate and reliable in recent years, in part because they now allow for far more customization, both in reporting and automated responses.
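
To make that concrete, the heart of a SIEM correlation rule is often nothing more than counting related events inside a time window. The sketch below is a generic, illustrative rule (the event fields and thresholds are hypothetical), not any specific SIEM product’s syntax:

    from collections import defaultdict
    from datetime import timedelta

    # Illustrative SIEM-style rule: alert when one source IP generates
    # five (5) or more failed logins within a ten-minute window.
    WINDOW = timedelta(minutes=10)
    THRESHOLD = 5

    def failed_login_alerts(events):
        """events: iterable of dicts such as
           {"time": datetime, "type": "auth_failure", "src_ip": "10.0.0.5"}"""
        recent = defaultdict(list)   # src_ip -> timestamps of recent failures
        alerts = []
        for e in sorted(events, key=lambda e: e["time"]):
            if e["type"] != "auth_failure":
                continue
            recent[e["src_ip"]].append(e["time"])
            # keep only failures inside the sliding window
            recent[e["src_ip"]] = [t for t in recent[e["src_ip"]]
                                   if e["time"] - t <= WINDOW]
            if len(recent[e["src_ip"]]) >= THRESHOLD:
                alerts.append((e["src_ip"], e["time"]))
        return alerts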

Don’t hand the SIEM reins over to anybody

Having a SIEM isn’t a set-it-and-forget-it proposition. Dealing with security threats is a digital cat and mouse game. New cyberattacks are being invented every day, and the types of threats, such as phishing, DDoS and Trojans (to name a few), are plentiful. And even if you provide extensive, internal training, you’ll never be able to fully neutralize the risk posed by your company’s biggest threat―end users, many of whom have a seemingly innate ability to allow, even unknowingly invite, security threats onto the network.

Specialized Security Skillsets

It’s a security analyst’s job to understand the greatest threats to assets, and to understand which of the customer’s assets take the highest priority. They can create mock attack scenarios to ensure the SOC can, and will, respond when real attacks occur. From this, they can better customize security detection and ensure responses are structured accordingly.

Threat Intelligence

A key element that security analysts provide is threat intelligence, which is the proactive understanding of existing threats or those on the horizon, including, of course, how to defend against them. Ask an IT professional about their organization’s threat management plan and the mitigations they have in place to address the vast array of existing or future threats, and you’ll probably be met with stunned silence. If they’re not well-versed in security, chances are existing and impending threats haven’t been considered. And if they haven’t been considered, it goes without saying that they’re not prepared to defend against them.

Plugging Security Gaps

Cybercriminals are essentially looking for one thing―vulnerabilities. Not fully understanding where network vulnerabilities exist can leave organizations wide open for attacks. Some of these vulnerabilities can be addressed with simple software patches, but if nobody on staff is closely monitoring and implementing them, you’ve made an unconscious decision to leave many security gaps unaddressed. It may or may not come as a surprise that most organizations don’t have a well-defined security patch management plan in place.

Monitored and Managed 24x7x365

Providing on-going, real-time management and monitoring of an organization’s endpoints, networks, services and databases 24×7 is critical when defending against threats. Your SOC is only as good as its weakest link, and if providing this level of security and scrutiny isn’t possible, you’ve just defined a very weak link. Threat detection and related responses must be timely, regardless of threat type, time of day or day of week.

For questions, call on the experts at GDT

Sure, companies can operate their own SOC, but whether it’s done in-house or with a 3rd party managed security solutions provider, it should be managed, maintained and monitored by tenured security analysts who think, live and breathe security. Anything less might soon leave you wondering why you ever thought a SOC could play second fiddle to the NOC. And security analysts, when combined with advanced automation solutions, will greatly enhance your defense against cyberattacks and security breaches.

For more information about GDT’s SOC Managed Services, or if you have questions about anything related to IT security, contact GDT’s security professionals here. They’d love to hear from you.

And if you’d like to better address some of your network security concerns, subscribe to GDT’s Vulnerability Alerts, which contain information and links to software patches.

GDT Lunch & Learn on Data Breaches–Protecting the Corporate Consumer

On Tuesday, May 22nd, GDT SOC Analyst Moe Janmohammad presented, as part of the GDT Agile Operations (DevOps) team’s weekly Lunch & Learn series, information about data breaches. They’re seemingly a weekly occurrence these days, and while there has been a lot of discussion around protecting consumers, very little is being done for the corporate purchaser.  Watch and learn how companies and individuals can understand what their risk profile is, and when and where they may have already been compromised.

GDT and QTS Enter Into Cloud and Managed Services Partnership

Agreement represents continued successful execution on QTS’ strategic growth plan

QTS Realty Trust (NYSE: QTS), a leading provider of software-defined and mega-scale data center solutions, today announced that it has entered into a strategic partnership with GDT, an international provider of managed IT solutions, representing a key step in QTS’ strategic growth plan announced in February 2018. Under the agreement, QTS will transition certain cloud and managed services customer contracts and support to GDT. QTS expects to complete its transfer of approximately 200 specific customers to GDT by the end of 2018.

Under the terms of the agreement, GDT will expand its colocation presence within QTS facilities to support customers as they are migrated to GDT’s platform. As GDT is an existing QTS partner and CloudRamp customer, QTS will facilitate a seamless integration with GDT through its Service Delivery Platform (SDP), which will provide customers enhanced visibility and control of their IT environments. Upon transition of the customers, GDT will maintain the current service level and support pursuant to the terms of each individual customer contract.

“We are pleased to partner with GDT, a leading managed IT provider and current QTS CloudRamp customer, to extend our hybrid solution capabilities while maintaining the consistent world-class service and support our customers have come to expect,” said Chad Williams, Chairman and CEO – QTS.

“This agreement also represents the next step in our strategic plan to accelerate growth and profitability,” Mr. Williams continued. “Consistent with our goal of narrowing the scope of cloud and managed services that we directly deliver, this partnership improves our ability to continue to deliver a differentiated hybrid solution, while unlocking enhanced profitability and future growth opportunities for QTS. Through SDP, we can enable a broader set of services for our customers through partner platforms including public cloud providers, Nutanix for Private Cloud, Megaport and Packetfabric for universal software-defined connectivity, and now GDT for managed hosting and other IT solutions.”

As part of the agreement, GDT will pay QTS a recurring partner channel fee based on revenue that is transitioned, as well as future growth on those accounts. While the financial benefit to QTS during the year will be relatively modest as the accounts are transitioned, this partnership arrangement is expected to support future revenue growth and profitability, beginning in 2019 and beyond, without significant cost to QTS. QTS expects that, in transitioning customer contracts to GDT, the Company will be able to drive accelerated leasing performance and growth, improve predictability in its business and significantly enhance overall profitability.

“We are pleased to expand our partner ecosystem with QTS, one of the leading innovators in the data center space,” said GDT CEO, JW Roberts. “This new partnership will greatly enhance our customer-first focus and our ability to consistently deliver innovative solutions to the IT industry. We look forward to managing a smooth customer transition and delivering additional value.”

In connection with today’s announcement, QTS also announced that the Company will issue its financial results for the first quarter ended March 31, 2018 before market open on Wednesday, April 25, 2018. The Company will also conduct a conference call and webcast at 7:30 a.m. Central time / 8:30 a.m. Eastern time. The dial-in number for the conference call is (877) 883-0383 (U.S.) or (412) 902-6506 (International). The participant entry number is 7555289# and callers are asked to dial in ten minutes prior to start time. A link to the live broadcast and the replay will be available on the Company’s website (www.qtsdatacenters.com) under the Investors tab.

About GDT 

Headquartered in Dallas, TX with approximately 700 employees, GDT is a global IT integrator and solutions provider approaching $1 Billion in annual revenue. GDT aligns itself with industry leaders, providing the design, build, delivery and management of IT solutions and services.

About QTS 

QTS Realty Trust, Inc. (NYSE: QTS) is a leading provider of data center solutions across a diverse footprint spanning more than 6 million square feet of owned mega scale data center space throughout North America. Through its software-defined technology platform, QTS is able to deliver secure, compliant infrastructure solutions, robust connectivity and premium customer service to leading hyperscale technology companies, enterprises, and government entities. Visit QTS at www.qtsdatacenters.com, call toll-free 877.QTS.DATA or follow on Twitter @DataCenters_QTS.

GDT launches GWEN to promote the advancement of women in IT

Dallas, TX – Dallas-based technology and systems integrator GDT announced the formation of GWEN, an organization designed to encourage and promote the personal transformation and advancement of women in Information Technology.

GWEN, which stands for GDT Women’s Empowerment Network, will work to promote equal opportunities for women and empower growth within the STEM (Science, Technology, Engineering and Mathematics) community. GWEN was created with the following four (4) initiatives in mind: mentorship and professional development, networking, education and community outreach.

According to Meg Gordon, GDT’s Vice President of Services Operations, “By utilizing GDT’s established business principles, GWEN will increase the impact of women in all aspects of business and technology.  This initiative gives women the ability to collaborate with other highly skilled, seasoned women professionals through focused leadership, visibility and recognition, networking opportunities, and a means of delivering personal and professional development programs.”

According to a report from the World Economic Forum, “In Information and Communication Technology, a sector which struggles with talent shortages, only 37% of companies regard enhancing women’s workforce participation as an opportunity for expanding the talent pool.” GWEN aims to increase that percentage.

About GDT

Founded in 1996, GDT is an award-winning, international multi-vendor IT solutions provider and maintains high-level partner status with several of the world’s leading IT solutions and hardware providers, including HPE, Cisco and Dell EMC. GDT specializes in the consulting, designing, deploying, and managing of advanced technology solutions for businesses, service providers, government, and healthcare. The GDT team of expert architects and engineers maintain the highest level of certifications to translate the latest ideas and technologies into innovative solutions that realize the vision of business leaders.