Solutions Blog

How SD-WAN can enhance application performance

By Richard Arneson

Remember the days when a new software application meant downloads, licenses, and minimum RAM and processing power requirements? Or when applications resided in a corporate data center and were accessed over expensive, leased lines from service providers, only to then be handed off to the Internet? Expensive, inefficient and prone to latency―not a good networking triad. And direct Internet access can be fraught with issues as well, leaving end users with unpredictable, inconsistent application performance and a spate of trouble tickets in their wake.

Hello SD-WAN―a friend to the application. While content is king in the marketing world, applications enjoy a similar, regal role in the business world. It’s estimated that each worker uses between 5.5 and 8 different computer-based applications each day, and another 7 to 10 of the mobile variety. An inability to access any one of them can quickly derail your, and your company’s, day. Here are the many ways SD-WAN can enhance your organization’s mission-critical applications:

Sidestep the bottlenecks

SD-WAN is like the traffic report on drive-time radio, only better―much better. Imagine a car that hears the traffic report and automatically steers you around the construction before you even know a snarl exists. SD-WAN works the same way: it continually monitors the network for bottlenecks (packet loss, jitter and latency) and selects the best, least congested route.
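
To make that path-selection idea concrete, here’s a minimal sketch in Python. The path names, metrics and weights are purely illustrative, not any vendor’s actual logic:

    # Hypothetical SD-WAN path selection: score each path on its measured
    # impairments and steer traffic to the healthiest one.
    paths = {
        "mpls":      {"loss_pct": 0.1, "jitter_ms": 2,  "latency_ms": 35},
        "broadband": {"loss_pct": 0.5, "jitter_ms": 8,  "latency_ms": 28},
        "lte":       {"loss_pct": 1.2, "jitter_ms": 15, "latency_ms": 60},
    }

    def path_score(m):
        # Lower is better; loss and jitter weigh heaviest because they
        # hurt real-time traffic the most (assumed weights).
        return m["loss_pct"] * 50 + m["jitter_ms"] * 2 + m["latency_ms"]

    best = min(paths, key=lambda name: path_score(paths[name]))
    print(f"Steering traffic over: {best}")  # -> mpls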

Prioritize traffic by application

With SD-WAN, policies can be set up so certain applications traverse network paths with lower latency and greater bandwidth. Conversely, lower-priority traffic, such as backups or Internet browsing, can be delivered over less expensive and/or less reliable connections.
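
As a rough illustration of such a policy (hypothetical table and names, since real products express this in their own policy languages), priority simply becomes a preference-ordered list of allowed paths per application:

    POLICY = {
        "voip":     {"priority": "high", "allowed_paths": ["mpls", "broadband"]},
        "erp":      {"priority": "high", "allowed_paths": ["mpls", "broadband"]},
        "backup":   {"priority": "low",  "allowed_paths": ["broadband", "lte"]},
        "browsing": {"priority": "low",  "allowed_paths": ["broadband", "lte"]},
    }

    def select_path(app, healthy_paths):
        # Take the first allowed path (in preference order) that's healthy.
        for path in POLICY[app]["allowed_paths"]:
            if path in healthy_paths:
                return path
        return None  # no compliant path available

    print(select_path("backup", {"mpls", "broadband"}))  # -> broadband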

Fast access

With SD-WAN, new sites can be turned up in a matter of minutes, giving users quick access to applications. When an SD-WAN edge appliance is plugged in, it automatically connects, authenticates and receives its configuration.
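
A generic sketch of that zero-touch sequence (stubbed, illustrative logic, not any vendor’s actual bootstrap protocol):

    def zero_touch_provision(serial, controller_configs):
        # Generic zero-touch sequence (illustrative only): a real appliance
        # phones home over any WAN link and presents a factory certificate.
        if serial not in controller_configs:
            raise PermissionError("unknown appliance")   # authentication fails
        config = controller_configs[serial]              # pull site configuration
        return f"site up with config: {config}"

    print(zero_touch_provision("SN-1234", {"SN-1234": {"vlan": 10, "qos": "voice-first"}}))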

Centralized policy management

Priorities can be centrally managed for each application based on any number of policies, such as QoS, reliability, security and visibility. This prioritization can also be designated by user, date, time or office location.

SLA adherence

With SD-WAN, companies can set up policies per application, including respective SLA criteria (packet loss, jitter, latency), so particular applications are only directed over connections that meet those SLA requirements. And if a connection goes down, the traffic can be re-routed to maintain SLAs, even if that means moving it to a broadband or MPLS link.
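
In code, SLA adherence amounts to filtering paths against per-application thresholds and failing over when the current path drops out of compliance. A hedged sketch, with invented thresholds:

    SLA = {"voip": {"max_loss_pct": 0.3, "max_jitter_ms": 5, "max_latency_ms": 50}}

    def compliant_paths(app, paths):
        s = SLA[app]
        return [name for name, m in paths.items()
                if m["loss_pct"] <= s["max_loss_pct"]
                and m["jitter_ms"] <= s["max_jitter_ms"]
                and m["latency_ms"] <= s["max_latency_ms"]]

    measured = {
        "mpls":      {"loss_pct": 0.1, "jitter_ms": 2, "latency_ms": 35},
        "broadband": {"loss_pct": 0.5, "jitter_ms": 8, "latency_ms": 28},
    }
    print(compliant_paths("voip", measured))  # -> ['mpls']; voice avoids broadband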

It’s transport- and carrier-agnostic

Because SD-WAN is a virtual WAN, it can run over the transport of your choosing―MPLS, 4G, Wi-Fi, et al. And there’s no longer a need to lease lines from a single service provider, which gives customers far greater flexibility, including the ability to monitor circuits regardless of the service provider.

Before you go all in on SD-WAN…

…engage GDT’s expert SD-WAN solutions architects and engineers at SDN@gdt.com. They’re experienced at providing SD-WAN solutions for companies of all sizes.

Is SD-WAN the same as WAN Optimization?

Aside from the list of positives you’ve likely heard about SD-WAN (and there are many), there’s one thing it isn’t―WAN Optimization. Many incorrectly use SD-WAN and WAN Optimization interchangeably. That isn’t to say SD-WAN doesn’t greatly optimize networks, just that it’s not technically WAN Optimization, which was introduced roughly fifteen years ago, when WAN circuits were, well, pricey.

WAN Optimization refers to techniques and technologies that maximize the efficiency of data traversing the network―basically, they let companies get the most out of legacy networks that still utilize WAN connections from telco providers, such as AT&T, Charter Spectrum, Level 3, and the like. Fifteen years ago, WAN Optimization was all the rage. Bandwidth requirements outgrew the IT budgets many companies had set aside to upgrade WAN connections, so WAN Optimization was the answer. Through caching and protocol optimization, end users could pull information from a local copy of a file that had already been downloaded once. In short, it squeezed as much bandwidth juice from the WAN as possible.
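
The caching half of that idea fits in a few lines. A minimal, generic sketch (not any vendor’s implementation): content that has crossed the WAN once is served locally thereafter:

    import hashlib

    cache = {}  # branch-side store of previously fetched content

    def fetch(url, download):
        key = hashlib.sha256(url.encode()).hexdigest()
        if key in cache:
            return cache[key]   # cache hit: no WAN round trip
        data = download(url)    # cache miss: the only WAN traversal
        cache[key] = data
        return data

    downloader = lambda u: b"...payload..."           # stand-in for the real transfer
    fetch("https://example.com/big.iso", downloader)  # crosses the WAN
    fetch("https://example.com/big.iso", downloader)  # served from cache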

WAN Optimization worked well for some traffic, but not all, and it required dedicated hardware at headquarters and at each remote location (then came the management and maintenance…). But bandwidth costs began to drop―precipitously―and Gig connections became both commonplace and affordable.

Sounds like the death of WAN Optimization, right?

Not so fast. If you surmised that cheaper, commoditized bandwidth and SD-WAN teamed up to toss WAN Optimization onto the scrapheap, you’ve surmised incorrectly. No question, the wallet-friendly cost of broadband and, of course, SD-WAN have reduced the desire for WAN Optimization, but not the need for it. WAN Optimization can serve as an impactful supplement to SD-WAN, helping you make the most of your infrastructure by:

  • Reducing the latency that comes with very wide area networks―those that span long distances.
  • Compressing data to address TCP/IP protocol limitations and satisfy stringent QoS requirements.
  • Addressing congestion on bandwidth-constrained links, which can hamper SD-WAN’s ability to re-route traffic quickly.
  • Handling slower, chattier protocols more efficiently.

Call on the experts

If you have questions about how SD-WAN can be utilized to bring its many benefits to your organization, like enhanced application performance, less complexity, greater flexibility and reduced network costs, contact GDT’s team of experienced SD-WAN solutions architects and engineers at SDN@gdt.com. They’d love to hear from you.

Cisco HyperFlex runs point on customers’ hyperconverged journeys

The term hyperconvergence has been getting a lot of press in the last few years, and rightly so. It provides pretty much everything that legacy IT infrastructures don’t―flexibility, scalability and simplicity. It enables, in a single system, the management of equipment to handle a wide range of workloads―database management, collaboration, packaged software like SAP and Oracle, virtual desktops, analytics, web servers, and more. It’s software-defined, which is another way of saying quicker network provisioning, more control and visibility, and less downtime.

Cisco HyperFlex

HyperFlex, Cisco’s answer to hyperconvergence, is being successfully utilized by a wide range of industries. The following are a few of the many ways in which organizations of all sizes are enjoying Cisco HyperFlex:

Virtual Desktops

There was a time, not too long ago, when companies couldn’t pull the trigger on a virtual desktop solution due to the high upfront costs. Sure, they loved the idea, but just couldn’t make it fit into their budget. HyperFlex not only addresses the prohibitive cost issue, but does so while tackling another one that organizations investigating a virtual desktop infrastructure (VDI) face―complexity.

Branch or Remote Offices

Whether through organic growth or due to a merger or acquisition, one thing is certain―your organization’s IT needs today will soon look different. So whether growth includes more employees, more locations, or both, HyperFlex allows for an easy way to deploy hardware wherever it’s needed while being managed from a central location.

Server Virtualization

With HyperFlex, virtual server resources can be reallocated as needed to address the changing demands on storage, compute, and networking. Legacy systems require different approaches to each (see Complexity).

DevOps

Developers are always under the gun to rapidly roll out solutions that address ever-evolving business needs. Without hyperconvergence, however, their job is much more taxing, as hardware provisioning must be separately considered for storage, networking, virtualization and compute. This is exacerbated because Agile project management and development require regular, ongoing testing and remediation. With Cisco HyperFlex, virtualized hardware can be easily configured to accommodate frequent revisions and testing.

Cisco HyperFlex provides Software-Defined…

…Compute. Cisco’s Unified Computing System (Cisco UCS) is the foundation on which HyperFlex is built, and provides a single point of management so resources can be easily adjusted to address the shifting needs of businesses.

…Storage. Cisco’s HyperFlex HX Data Platform software is a high-performance file system that supports hypervisors (virtual machine monitors, or VMMs) with optimization and data management services.

…Networking. Cisco’s UCS provides a highly adaptive environment that integrates easily with Cisco Application Centric Infrastructure (Cisco ACI), Cisco’s software-defined networking (SDN) solution that delivers hardware performance with software flexibility.

Call on the experts

To find out more about Cisco HyperFlex and what hyperconvergence can do for your organization, contact GDT’s hyperconvergence experts at SolutionsArchitects@gdt.com. They’d love to hear from you.

Why Companies are Turning to Mobility Managed Solutions (MMS)

By Richard Arneson

If mobility isn’t the most used word of the past ten years, it’s got to be a close second. And mobility is no longer just about using smartphones or tablets to purchase Christmas presents and avoid trips to the shopping mall. Mobility is transforming the way businesses operate, how their employees collaborate and, ultimately, how they generate more revenue. With the rapidly increasing implementation of BYOD (Bring Your Own Device), companies need to ensure that connectivity is fast, reliable, seamless and highly secure. And with the Internet of Things (IoT), companies can now offer customers immediate value and utilize advanced data analytics to better understand buyers’ tendencies and purchasing behaviors.

With so much at stake, it’s critical that companies carefully develop a mobility strategy that helps employees optimize their time and ultimately deliver bottom line results. Following are some of the many reasons why companies are turning to MMS providers to ensure they’ll get the most out of their mobility solutions.

Skillsets

Counting on your existing IT staff to have the necessary skillsets in place to create, then implement, a mobility strategy could end up costing your organization considerable time and money. Having them ramp up their mobility education is fine, but it lacks one key component―experience. You wouldn’t have a surgeon with no prior hands-on experience operate on you or a loved one. Why take that risk with your company’s mobility strategy?

Resources

Lack of experience goes hand-in-hand with poor time management. In other words, the less the experience, the longer it will take. And pulling existing IT staff off other key initiatives could mean putting projects on hold, if not cancelling them altogether. And the time it takes to remediate issues caused by that lack of hands-on knowledge will only compound the problem.

Security

With the ever-increasing demand for mobility solutions and applications, ensuring that company data is properly protected can’t be overlooked or handled piecemeal. Doing so will leave you in reactive, not proactive, security mode. Mobile security is enhanced and improved on a regular basis, but without the needed expertise on staff, those security enhancements could go unimplemented. Also, an experienced Mobility Managed Solutions provider can help you set the needed security policies and guidelines.

Maximizing Employee Productivity

One of the key reasons companies develop and enhance mobility solutions is to help ensure employee productivity is maximized. Not conducting fact-finding interviews with different departments to understand their existing and evolving demands will mean your mobility strategy is only partially baked. And trying to retrofit solutions to address overlooked elements will result in additional time and unnecessary costs.

Monitoring

Mobility solutions aren’t a set-it-and-forget-it proposition. They must be managed, monitored and optimized on a regular basis. Updates need to be maintained and administered. And as with any new technology roll-out, there will be confusion and consternation, so technical support needs to be prepped and ready before trouble tickets start rolling in.

Best Practices

There are a number of best practices to consider when developing and implementing mobility solutions. Are you in a heavily regulated industry and, if so, does your solution adhere to industry mandates? Have mobile form factors and operating systems been taken into consideration? Will roll-out be conducted all at once or in a phased approach? If phased, have departmental needs been analyzed and prioritized? Have contingency plans been developed in the event roll-out doesn’t perfectly follow the script you’ve written?

Costs

Lacking the mobility experience and skillsets on staff could mean unnecessary costs are incurred. In fact, studies have shown that companies utilizing an MMS provider can save anywhere from 30 to 45% per device.

Experienced Expertise

Each of the aforementioned considerations is critically important, but all fall under one primary umbrella―experience. You can read a book about how to drive a car, but it won’t do you much good unless you actually drive one. It’s all about the experience, and mobility solutions are no different. Hoping you have the right skillsets on staff and hoping it will all work out are just other ways of saying high risk. Hope is not a good mobility solutions strategy.

If you have questions about your organization’s current mobility strategy, or you need to develop one, contact GDT’s Mobility Solutions experts at Mobility_Team@gdt.com. The team is composed of experienced solutions architects and engineers who have implemented mobility solutions for some of the largest organizations in the world. They’d love to hear from you.

GDT hosts VMware NSX Workshop

On Thursday, June 28th, GDT hosted a VMware NSX workshop at GDT’s Innovation Campus. It was a comprehensive, fast-paced training course focused on installing, configuring, and managing VMware NSX™. It covered VMware NSX as part of the software-defined data center platform, including functionality operating at Layers 2 through 7 of the OSI model. Hands-on lab activities helped support attendees’ understanding of VMware NSX features, functionality, and ongoing management. Great event, as always!

Protection for your own backyard

By Richard Arneson

An 18-month-old study by the Ponemon Institute, an independent research and education organization that works to advance privacy management practices for businesses and government agencies, found that even though malicious insiders are the largest, most costly source of security breaches, over seventy-five percent of businesses remain largely unprotected from them. That’s astounding, especially considering the exponential growth of IoT and BYOD. Actually, though, that growth is part of the problem, and it comes down to two issues—reduced visibility into these devices, and security resources that haven’t adjusted accordingly. Sure, everybody loves anytime, always-on connectivity, but without a secure Network Access Control (NAC) solution, you may need to add “…and anyone can hop on our network” after anytime and always-on.

Many organizations make their decision easier by selecting the same vendor they’re already using for their infrastructure. Or, worse, they’ve taken a We’ll get to that later, let’s first just worry about getting everybody connected approach. The former gives the illusion of security, even though it can be fraught with security gaps; the latter suggests not illusion, but delusion.

ClearPass – the secure gateway

Aruba, the 16-year-old, Santa Clara-based wireless networking company purchased by HPE three years ago, developed ClearPass to provide NAC and cybersecurity policy management that discovers, profiles, authenticates and authorizes any device—IoT, BYOD or otherwise—that needs to access customers’ networks. In addition, it can integrate with Aruba’s IntroSpect behavioral analytics solution, and it can be deployed in any network, regardless of vendor.

Whether networks are accessed through wireless, wired, or a VPN solution, ClearPass can meet those needs while providing real-time data that can be utilized to create policies to satisfy the most mobile of workforces.

ClearPass Guest

Designed to meet the needs of facility visitors, ClearPass Guest provides secure, automated guest access on wireless or wired networks, regardless of mobile device. Whether a self-registration or sponsor-involved option is selected, credentials and pre-authorized access privileges can be enforced for short- or long-term guests. Credentials can be delivered by text, email or printed badge, and can be set to automatically provide access for a specified amount of time.
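
The underlying idea, credentials that expire automatically after a sponsor-defined window, is easy to picture in code. This is a generic illustration only, not the ClearPass API:

    from datetime import datetime, timedelta, timezone

    def issue_guest_credential(guest, sponsor, hours_valid=8):
        # Hypothetical record; in practice the credential is delivered
        # by text, email or printed badge.
        expiry = datetime.now(timezone.utc) + timedelta(hours=hours_valid)
        return {"guest": guest, "sponsor": sponsor, "expires": expiry}

    def is_valid(credential):
        return datetime.now(timezone.utc) < credential["expires"]

    visitor = issue_guest_credential("jdoe", sponsor="reception", hours_valid=4)
    print(is_valid(visitor))  # -> True until the four-hour window lapses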

ClearPass Onboard

Regardless of the mobile device used—Windows, iOS, Android, macOS, Chromebook, and others—ClearPass Onboard can automatically configure and provision them and ensure they’re securely connected to the network. ClearPass Onboard is a perfect way to address BYOD security, allowing administrators to easily configure wireless, wired or VPN settings, and apply per-device certificates and profiles so users can securely connect to 802.1X-enabled networks. In addition, it greatly enhances the ability to troubleshoot device- and user-based policies. As a result, workflows are streamlined, allowing IT helpdesk personnel to automate processes that alleviate IT burdens while enhancing the user experience.

ClearPass QuickConnect

ClearPass QuickConnect is another great security solution for BYOD environments. It addresses one of the most challenging and complicated aspects of network access—configurations related to 802.1X. A user-driven configuration wizard, accessible from anywhere, walks users step-by-step through configuring SSID and 802.1X settings, regardless of the device being used.

It’s no wonder they’re a leader in IoT and BYOD security

Enterprise-grade security, greater controls, a customized guest access portal, multi-vendor capabilities, automated device provisioning to address IoT and BYOD initiatives, industry-leading and first-to-market features, proofs of concept—these, and many others, are the reasons Aruba ClearPass delivers clear, unique and proven differentiators in the world of IT security.

Got questions on security related to IoT and BYOD? Call on the Security experts

To find out more about how to secure your IoT and BYOD initiatives, contact GDT’s tenured and talented security analysts at SOC@GDT.com. From their Security- and Network Operations Centers, they manage, monitor and protect the networks of some of the most notable enterprises, service providers, healthcare organizations and government agencies in the world. They’d love to hear from you.

Read more about network security here:

Gen V

Sexy, yes, but potentially dangerous

Tetration—you should know its meaning

It’s in their DNA

Rx for IT departments—a security check-up

When SOC plays second fiddle to NOC, you could be in for an expensive tune

How to protect against Ransomware

GDT Lunch & Learn on Agile IoT

On Tuesday, June 19th, GDT Associate Network Systems Engineer Andrew Johnson presented, as part of the GDT Agile Operations (DevOps) team’s weekly Lunch & Learn series, a session on the wild world of IoT (Internet of Things). Andrew provided a high-level overview of what IoT is and what can be done when all things are connected. As more and more devices get connected, the ability to draw rich and varied information from the network is changing how companies, governments and individuals interact with the world.

Why this market will grow 1200% by 2021!

An IDC report released in 2017 predicted the SD-WAN market would grow from roughly $700M at the time to over $8B by 2021. IDC has since revised that figure. Now it’s over $9B.

SD-WAN is often, yet incorrectly, referred to as WAN Optimization; the two aren’t the same thing, but the label aptly describes what SD-WAN delivers. The sundry WAN solutions of the past twenty-five years―X.25, private lines (T1s/DS3s) and frame relay―gave way to Multi-Protocol Label Switching (MPLS) in the early 2000s.

MPLS moved beyond frame relay’s Committed Information Rate (CIR)―a throughput guarantee―and offered Quality of Service (QoS), which allows customers to prioritize time-sensitive traffic, such as voice and video. MPLS has been the primary means of WAN transport over the last fifteen years, but SD-WAN provides enterprises and service providers tremendous benefits above and beyond MPLS, including the following:

Easier turn-up of new locations

With MPLS, as with any transport technology of the past, turning up a new site or upgrading an existing one is complex and time-consuming. Each edge device must be configured separately, and the simplest of changes can take weeks. With SD-WAN, a new location can be provisioned automatically, greatly reducing both time and complexity.

Virtual Path Control

SD-WAN software can direct traffic in a more intelligent, logical manner and, like MPLS, is capable of addressing QoS. SD-WAN can detect a path’s degradation and re-route sensitive traffic based on its findings. And having backup circuits stand by unused (and costing dollars, of course) is a thing of the past with SD-WAN.

Migration to Cloud-based Services

With traditional WAN architectures, traffic gets backhauled to a corporate or third-party data center, which is costly and slows response times. SD-WAN allows traffic to be sent directly to a cloud services provider, such as AWS or Azure.

Security

SD-WAN provides a centralized means of managing security and policies, and utilizes standards-based encryption regardless of transport type. And once a device is authenticated, assigned policies are downloaded and cloud access is granted―quick and easy. Compare that to traditional WANs, where security is handled by edge devices and firewalls―far more complex and costly.

…and last, but not least

SD-WAN can greatly reduce bandwidth costs, which are often the greatest expense IT organizations incur, especially if they’re connecting multiple locations. MPLS circuits are pricey, and SD-WAN can utilize higher bandwidth, lower cost options, such as broadband or DSL.

Does SD-WAN mark the end of MPLS?

Given the stringent QoS demands of some enterprise organizations, and the fear that SD-WAN won’t be able to accommodate them, it’s unlikely that SD-WAN will totally replace MPLS. And some organizations are simply averse to change, and/or fear their current IT staff doesn’t have the necessary skillsets to successfully migrate to SD-WAN, then properly monitor and manage it moving forward.

Call on the SD-WAN experts

To find out more about SD-WAN and the many benefits it can provide your organization, contact GDT’s tenured SD-WAN engineers and solutions architects at SDN@gdt.com. They’ve implemented SD-WAN solutions for some of the largest enterprise networks and service providers in the world. They’d love to hear from you.

Calculating the costs, hard and soft, of a cloud migration

When you consider the costs of doing business, you might only see dollar signs―not uncommon. But if your organization is planning a cloud migration, it’s important to understand all the costs involved, both hard and soft. Sure, calculating the hard costs of a cloud migration is critically important―new or additional hardware and software, maintenance agreements, additional materials, etc.―but failing to consider and calculate soft costs could mean pointed questions from C-level executives will embarrassingly go unanswered. And not knowing both types of costs could result in IT projects and initiatives being delayed or cancelled—there’s certainly a cost to that.

When you’re analyzing the many critical cloud migration components―developing risk assessments, analyzing the effects on business units, applications and interoperability―utilize the following information to help you uncover all associated costs.

First, you’ll need a benchmark

It’s important to first understand all costs associated with your current IT infrastructure. If you haven’t calculated that cost, you won’t have a benchmark against which you can evaluate and compare the cost of a cloud migration. Calculating direct costs, such as software and hardware, is relatively easy, but ensure that you’re including additional expenses, as well, such as maintenance agreements, licensing, warranties, even spare parts, if utilized. And don’t forget to include the cost of power, A/C and bandwidth. If you need to confirm cost calculations, talk with accounts payable―they’ll know.
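
One simple way to keep that benchmark honest is to itemize everything in one place. The categories below mirror the paragraph above; the dollar figures are made up for illustration:

    # Illustrative annual benchmark of current infrastructure costs (US$).
    current_costs = {
        "hardware": 120_000, "software": 45_000, "maintenance": 18_000,
        "licensing": 22_000, "warranties": 6_000, "spare_parts": 4_000,
        "power_and_ac": 15_000, "bandwidth": 30_000,
    }
    print(f"Annual benchmark: ${sum(current_costs.values()):,}")  # -> $260,000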

Hard Costs of a Cloud Migration (before, during and after)

Before

Determining the hard costs related to a cloud migration includes any new or additional hardware required. That’s the easy part―calculating the monthly costs from cloud service providers is another issue. It has gotten easier, especially for Amazon Web Services (AWS) customers: AWS offers an online tool that calculates Total Cost of Ownership (TCO) and monthly costs. But it’s still no picnic. Unless you have the cloud-related skillsets on staff, getting an accurate assessment of monthly costs might require that you incur another, but worthwhile, hard cost: hiring a consultant who understands cloud migrations and can conduct a risk assessment beforehand.
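
For a rough feel of the arithmetic such tools perform, here’s a back-of-the-envelope estimator. The rates are placeholders, not AWS pricing, and real calculators weigh many more dimensions:

    def monthly_cost(vcpu_hours, gb_storage, gb_egress,
                     rate_vcpu_hr=0.05, rate_gb_storage=0.02, rate_gb_egress=0.09):
        # Placeholder rates; substitute your provider's published pricing.
        return (vcpu_hours * rate_vcpu_hr
                + gb_storage * rate_gb_storage
                + gb_egress * rate_gb_egress)

    # e.g., 4 vCPUs running all month (~730 hrs each), 500 GB stored, 200 GB out:
    print(f"${monthly_cost(4 * 730, 500, 200):,.2f}/month")  # -> $174.00/month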

During

Cloud service providers charge customers a fee to transfer data from existing systems. And there might be additional costs in the event personnel are needed to ensure customers’ on-prem data is properly synced with data that has already been transferred. Ensuring this data integrity is important, but not easy, especially for an IT staff with no prior cloud migration experience.

After

Other than the monthly costs you’ll incur from your cloud provider of choice, such as AWS or Azure, consideration must be given to the ongoing maintenance costs of your new cloud environment. And while many of these are soft costs, there can be hard costs associated with them, as well, such as the ongoing testing of applications in the cloud.

The Hard-to-Calculate Soft Costs

If they’re not overlooked altogether, soft costs are seldom top-of-mind. The value of your staff’s time isn’t hard to calculate (multiply project hours by an hourly rate, derived by dividing weekly pay by 40 hours), but pinning down the amount of time a cloud migration has consumed isn’t easy. Now try calculating one that hasn’t taken place yet. There might be a cost in employee morale, as well, in the event the migration doesn’t succeed or deliver as planned.
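
That staff-time formula, expressed as a quick calculation:

    def staff_time_cost(project_hours, weekly_pay):
        hourly_rate = weekly_pay / 40    # weekly pay over a 40-hour week
        return project_hours * hourly_rate

    # e.g., 120 project hours from an employee paid $2,400/week:
    print(staff_time_cost(120, 2400))    # -> 7200.0, i.e., $7,200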

Consider the amount of time required to properly train staff and keep them cloud-educated in perpetuity―today’s cloud will look a lot different from tomorrow’s.

The testing and integration of applications to be migrated take considerable time, as well, and several factors must be considered, such as security, performance and scalability. Testing should also cover potential risks that might result in downtime, and ensure interoperability between servers, databases and the network.

Also, there’s a far greater than 0% chance your cloud migration won’t go exactly as planned, which will require additional man-hours for proper remediation.

There are also soft costs associated with projects that are put on hold, especially if they delay revenue generation.

If questions exist, call on the experts

Here’s the great news―moving to the cloud, provided the migration is done carefully and comprehensively, will save considerable hard and soft costs now and in the future. Calculating the costs of a cloud migration is important, but not an easy or expeditious venture.

If you have questions about how to accurately predict the costs of a future cloud migration, contact GDT’s Cloud Experts at AWSTeam@gdt.com. They’d love to hear from you.

A Fiber Optic First

By Richard Arneson

It’s one of those “Do you remember where you were when…?” questions, at least for those fifty or older. And it didn’t just affect those in northern, hockey-friendly states. People as far south as Texas stopped their cars at the side of the road, honked their horns, and broke into The Star-Spangled Banner and America the Beautiful while passing motorists sprayed them with wet grime. It was Friday, February 22nd, when radios nationwide announced that the impossible had occurred at the 1980 Winter Olympics in Lake Placid, NY—the United States hockey team, made up primarily of college-aged amateurs, had just defeated the Soviet Union’s Red Army team, considered by most who knew the sport to be the best hockey team of all time.

The closing seconds, announced worldwide by legendary sportscaster Al Michaels, became arguably the most well-known play-by-play call in sports history:

“11 seconds, you’ve got 10 seconds, the countdown going on right now! Morrow, up to Silk. Five seconds left in the game. Do you believe in miracles? YES!”

Legendary for several reasons

The game, which Sports Illustrated named the greatest sporting event in American history, is legendary for other reasons, as well. The TV broadcast actually occurred later that evening during prime time on ABC, and it was part of the first television transmission to utilize fiber optics. While fiber didn’t deliver the primary TV transmission, it provided backup video feeds. Based on that success, fiber became the primary transmission vehicle four years later at the 1984 Winter Olympics in Sarajevo.

Why fiber optics will be around―forever

It’s no wonder fiber optics carries the vast majority of the world’s voice and data traffic. There was a time (the late 1950s) when it was believed satellite transmission would be the primary, if not exclusive, means of delivering worldwide communications. It wasn’t the Olympics, but a 1958 Christmastime address by President Eisenhower to allay Americans’ Cold War fears that was the first delivered via satellite. But if you’re a satellite television user, you’ve certainly experienced the downtime that comes with heavy cloud cover or rain.

And wireless communications, such as today’s 4G technology (5G will be commercially available in 2020), require fiber optics to backhaul data from wireless towers to network backbones, from which it’s delivered to its intended destination via…fiber optics.

The question regarding fiber optics has been debated for years: “Will any technology on the horizon replace the need for fiber optics?” Some technologists (although there appear to be few) say yes, but most say no―as in absolutely not. Line-of-sight wireless communications are an option, and have been around for years, but deploying them in the most populated areas of the country―cities―is impractical. If anything stands between communicating nodes, you’ll be bouncing your signal off a neighboring building. Not effective.

Facebook will begin trials in 2019 for Terragraph, a service it claims will replace fiber optics. Sure, it might in some places, such as neighborhoods, but it can only transmit data over distances of 100 feet or less. It’s the next generation of 802.11, and while it’s capable of speeds up to 30 Gbps, it’s no option for delivering 1s and 0s across oceans.

Fiber is fast, it’s durable, and it lasts a long time. Yep, fiber optics will be around for a while.

Did you know?

  • Signals travel over fiber at nearly the speed of light, and fiber isn’t affected by EMI (electromagnetic interference).
  • Without electricity coursing through it, fiber doesn’t create fire hazards. It’s also green (as in eco-friendly green), and it degrades far less quickly than its coax and copper counterparts.
  • Fiber is incredibly durable, and isn’t nearly as susceptible to breakage as copper wire or coaxial cable. It also has a service life of 25-35 years.
  • There’s less attenuation with fiber, meaning a greatly reduced chance of signal loss.
  • With Dense Wavelength Division Multiplexing (DWDM), a single fiber’s light can be divided into as many as eighty wavelengths, each carrying a separate, simultaneous signal (see the quick arithmetic below).
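
The capacity math behind that last bullet is straightforward; the per-channel rate below is an assumption for illustration:

    wavelengths = 80         # DWDM channels on a single fiber
    gbps_per_channel = 10    # assumed rate; modern systems run 100G+ per channel
    print(f"{wavelengths * gbps_per_channel} Gbps over one fiber")  # -> 800 Gbps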

Call on the experts

If you have questions about how optical networking can help your organization, contact the GDT Optical Transport Team at Optical@gdt.com. The team is composed of highly experienced optical engineers and architects, and supports some of the largest enterprise and service provider networks in the world.