Solutions Blog

Virtual Machine or Container…or Hypervisor? Read this, and you can make the call

By Richard Arneson

Containers have been around for years, but we’ll leave their history for another blog. Hypervisors, if you recall, are software that manages virtual machines (VMs), each of which can run its own programs while appearing to have the host hardware’s memory, processor and other resources all to itself. Hypervisors are, basically, a platform for VMs. But don’t be surprised to hear hypervisor and VM used interchangeably; they shouldn’t be, but it’s not uncommon. Just remember―hypervisors are the software that runs VMs.

They’re both Abstractions, but at different layers

Hypervisors (VMs)―physical layer

Abstractions relate to something that’s pulled, or extracted, from something else. Hypervisors abstract physical resources, such as those listed above (memory, processor and the rest), from the host hardware, and those resources can be doled out to each of the virtual machines. Because the hypervisor abstracts resources at the physical level, it can, for example, turn a single server into many, allowing multiple VMs to run off a single machine. Each VM runs its own OS and applications, which can take up loads of resources and make for slow boot times.

Containers―application layer

Containers are, again, an abstraction, but one pulled from the application layer: they package code and its related dependencies into one (1) happy family. What’s another word for this packaging? Yep, containerization.

What are the benefits of containers over VMs?

Application Development

There are several benefits related to containers, but we’ll start with the differentiator that provides the biggest bang for the buck. Prior to containers, software couldn’t be counted on to reliably run when moved to different computing environments. Let’s say DevOps wants to move an application to a test environment. It might work fine, but it’s not uncommon for it to work―here’s a technical term―squirrelly. Maybe tests are conducted on Red Hat and production will be on, say, Debian. Or both locations have different versions of Python. Yep, squirrelly results.

In short, containers make it far easier for software developers by enabling them to know their creations will run, regardless of where they’ve been deployed.
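To make that concrete, here’s a minimal sketch using Docker’s Python SDK; it assumes the Docker Engine and the docker package are installed, and the image and command are purely illustrative. Because the image pins both the OS and the Python version, the container behaves the same on a Red Hat test box as on a Debian production host.

    # Minimal sketch using Docker's Python SDK (pip install docker).
    # The pinned image tag carries the OS and Python version with the app,
    # so it runs identically on Red Hat, Debian, or anything else.
    import docker

    client = docker.from_env()  # talk to the local Docker daemon

    # Run a throwaway container from a pinned image and capture its output.
    output = client.containers.run(
        "python:3.11-slim",           # OS + Python version travel together
        ["python", "--version"],
        remove=True,                  # clean up the container when it exits
    )
    print(output.decode().strip())    # same answer on every host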

Efficiency

Containers take up far less space than VMs, which, again, each run their own OS. In addition, a single host can run far more containerized applications than it can VMs, so fewer machines are needed. Make no mistake, VMs are great, but when heavy scaling is required, you may find yourself dedicating resources that are, basically, managing a spate of operating systems.

And consider moving workloads between vendors with VMs. It’s not as simple as dragging an application from one OS to the other. A vSphere-based VM can’t have associated workloads moved to, say, Hyper-V.

Microservices

Microservices, which can run in containers, break applications down into smaller, bite-sized chunks. This allows different teams to easily work independently on different parts or aspects of an application. The result? Faster software development.

No, containers don’t mark the end of VMs and Hypervisors

In fact, containers and VMs aren’t mutually exclusive; they can co-exist beautifully. As an example, a particular application may need to talk to a database on a VM. Containers can easily accommodate this particular scenario.

Sure, containers are efficient, self-contained systems that allow applications to run regardless of where they’ve been deployed. But containers might not be the best option for every situation, and without the in-house expertise to weigh the difference, you’ll probably be left wondering which―VMs or containers―will be the most beneficial to your organization. And, again, it might not be an either/or situation. For instance, because containers share one OS, they could, if you don’t have security expertise, leave you more open to security breaches than VMs would. Your best bet? Talk to experts like those at GDT.

Please, use your resources

You won’t find better networking resources than GDT’s talented solutions architects and engineers. They hold the highest technical certifications in the industry and have designed and implemented complex networking solutions for some of the largest enterprises and service providers in the world. They can be reached at SolutionsArchitects@gdt.com or at Engineering@gdt.com. They’d love to hear from you.

Shadow IT―you might be a participant and don’t even know it

By Richard Arneson

Everybody loves the cloud, and why wouldn’t they? The amount of innovation and productivity it has brought to businesses worldwide has been staggering. Where Salesforce once appeared to stand alone as the only cloud-based software service, it’s been joined over the past few years by thousands of applications that were once individually loaded on PCs (Office 365, the Adobe Creative Suite and WordPress come to mind). But with the good comes the bad―more accurately, the concerns―and, in the case of The Cloud, you can list issues related to security, governance and compliance as those that counterbalance the positive side of the Cloud ledger.

Shadow IT

Not to paint everybody with the same, broad brush stroke, but the preponderance of workers either have participated in Shadow IT or continue to do so (it’s primarily the latter). Shadow IT refers to information technology that operates and is managed without the knowledge of the IT department―doesn’t sound very safe and secure, does it? Have you ever downloaded software that helps accomplish a task or goal without the knowledge of IT? Probably, right? That’s Shadow IT. But that’s not to say Shadow IT participants are operating with devious intentions; they do it for a variety of reasons, such as a need for expediency, or perhaps because corporate red tape and its required approvals slow them down. Participants’ goals―efficiency, productivity―may be noble and spot-on, but their actions can create a host of security headaches and issues at some point in the future. And there’s a very good chance they will. It’s estimated that within one (1) year, data breaches worldwide will cost organizations a collective $2.1 trillion. Oh, and the United States has the highest cost per breach ($7.9 million) in the world. Shadow IT helps buoy those numbers. Thinking a security issue only happens to the other guy is living in a fool’s paradise.

Cloud Access Security Brokers (CASB)

Sending out policies and conducting training for employees regarding computer and network use is great, and strongly encouraged, but counting on everybody adhering to these mandates is unreasonable and impractical, especially if your company has tens of thousands of workers scattered throughout the world.

To address the issue of Shadow IT, the industry has developed Cloud Access Security Brokers (no, they’re not people, but software), the name given by Gartner five (5) years ago to cloud security solutions centered around four (4) pillars: visibility, compliance, data security and threat protection. CASB is software planted between a company’s IT infrastructure and the cloud, and is now offered by several vendors, including Cisco―its CASB solution is called Cisco Cloudlock.

CASB utilizes an organization’s security policies to secure the flow of data between its IT infrastructure and the cloud. It encrypts data, protects it from malware attacks, and helps defend against the scourge that is Shadow IT.

For more information…

With the help of its state-of-the-art Security Operations Center (SOC), GDT’s team of security professionals and analysts have been securing the networks of some of the most noteworthy enterprises and service providers in the world. They’re highly experienced at implementing, managing and monitoring Cisco security solutions. You can reach them at SOC@gdt.com. They’d love to hear from you.

The meaning of the word Tetration…and why you should learn it

By Richard Arneson

You won’t find its definition in Merriam-Webster, The Oxford English Dictionary or at Dictionary.com. But if you’re in the IT industry, it’s a term you’ve either heard or will be hearing a lot about soon. Why? Because Tetration is what Cisco has named its robust analytics platform. In case you’re wondering, tetration (the word, not Cisco’s platform) is the fourth operation in the sequence that begins with addition, multiplication and exponentiation―in other words, iterated exponentiation (gulp)―which turns modest inputs into staggeringly huge numbers. Huge quantities yielding usable, meaningful results―yes, the word tetration perfectly describes an analytics platform built to process enormous volumes of data.
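For the mathematically curious, here’s the actual definition, along with a small worked example showing how quickly it grows:

    % Tetration: iterated exponentiation, written as a left superscript.
    {}^{n}a \;=\; \underbrace{a^{a^{\cdot^{\cdot^{a}}}}}_{n\text{ copies of }a}
    \qquad \text{e.g.} \qquad
    {}^{4}2 \;=\; 2^{2^{2^{2}}} \;=\; 2^{16} \;=\; 65{,}536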

Addressing the limitations of perimeter-based security

Cisco Tetration comprehensively addresses a very complex environment―multi-cloud data centers and the application workloads that run in them. Perimeter-based security falls short of protecting those data centers and applications; Tetration fills that gap, providing workload protection through zero-trust segmentation, an industry-wide security philosophy centered around the belief that nothing should be automatically trusted and everything must be verified.

With Cisco Tetration, customers can identify security incidents faster, and, as a result, reduce their company’s attack surface. While being infrastructure-agnostic and capable of supporting on-premises and public cloud workloads, Tetration enables data center security to be adaptive, attainable and effective.

Tetration is part of Cisco’s portfolio of security products, the others being Application Centric Infrastructure (ACI), Stealthwatch and its Firepower Next Gen Firewalls.

How Whitelisting and Segmentation are addressed in Cisco Tetration

Whitelisting

Whitelisting refers to applications that have been approved (yes, it’s the opposite of blacklisting). Cisco Tetration automates whitelisting policies based on the dependency, communication and behavior of applications. It keeps an inventory of software packages (including associated versions) and baselines processes, after which it looks for any behavioral anomalies. Cisco Tetration constantly inventories the applications and maintains information about any exposures specifically related to them.
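Conceptually, the baseline-then-flag approach boils down to something like the following sketch (an illustration of the idea only, not Cisco’s code; the process names are made up):

    # Illustrative sketch of baseline-then-flag whitelisting (not Cisco's
    # code). Baseline the (process, version) pairs seen during normal
    # operation, then flag anything outside that whitelist.
    baseline = {("nginx", "1.24.0"), ("postgres", "15.3"), ("sshd", "9.3")}

    def audit(observed: set) -> set:
        """Return observed processes that aren't in the baseline."""
        return observed - baseline

    snapshot = {("nginx", "1.24.0"), ("cryptominer", "0.1")}
    for proc, ver in sorted(audit(snapshot)):
        print(f"ANOMALY: unapproved process {proc} (version {ver})")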

Segmentation

Once whitelisting policies have been automatically applied, those whitelisted applications are segmented across different domains, regardless of infrastructure type, such as on-prem or cloud-based. So if a cyber attacker has penetrated perimeter-based security, the segmenting of applications prevents lateral movement and communication once inside your network. Segmentation allows users to only access specific resources, which helps better detect suspicious behaviors or patterns. If there is a breach, segmentation limits its ill-effects to a local, much smaller subnet.
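Here’s a toy illustration of the default-deny idea behind that segmentation (the segment names are hypothetical):

    # Toy sketch of zero-trust segmentation: traffic between segments is
    # denied unless the (source, destination) pair is explicitly whitelisted.
    ALLOWED_FLOWS = {
        ("web-tier", "app-tier"),
        ("app-tier", "db-tier"),
    }

    def permit(src: str, dst: str) -> bool:
        """Default-deny: only whitelisted flows may cross segments."""
        return (src, dst) in ALLOWED_FLOWS

    print(permit("web-tier", "app-tier"))  # True:  whitelisted
    print(permit("web-tier", "db-tier"))   # False: lateral movement blocked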

The Meaning of Cisco Tetration

While you won’t find Cisco Tetration in any of the aforementioned dictionaries, here’s a quick, bulleted summary of what it provides to customers:

  • Quick detection of suspicious application activities and anomalies
  • Dramatic reductions in attack surface
  • An automated zero-trust security model
  • Workload protection across on-prem and cloud data centers

Security Experts

GDT’s team of security professionals and analysts have been protecting, from their state-of-the-art Security Operations Center (SOC), the networks of some of the most notable enterprises and service providers in the world. You can reach them at SOC@gdt.com. They’d love to hear from you.

What exactly is a Network Appliance?

By Richard Arneson

We work in an industry rife with nomenclature issues. For instance, Hybrid IT is often used interchangeably with Hybrid Cloud―it shouldn’t be; they’re different. They were even referred to as such, in an “also known as” manner, within a beautiful, 4-color brochure produced by one of the leading equipment vendors in the IT industry. I’ve seen hyperconverged substituted for converged, SAN confused with NAS, and SDN and SD-WAN listed as equivalents. The list is seemingly endless.

The good news? Getting the answer is pretty easy, and only a few clicks away. Yes, Google is, for most, the answer to getting correct answers. Ask it a question, then read through the spate of corresponding articles from reputable sources, and you can generally deduce the right answer. When ninety-eight (98) answers say it’s A, and one (1) claims it’s B―it’s probably A.

When does “it” become an Appliance?

Sitting in a non-company presentation recently, I heard the word appliance used several times, and, even though I’ve been in the IT and telecommunications industry for years, I realized I didn’t technically know what appliance meant, or how it differed from other networking equipment. I turned to the person seated at my left and asked, “What’s the difference between an appliance and a piece of networking equipment, be it a router, server, etc.?” The answer he provided offered little help. In an attempt to hide my dissatisfaction, I quietly whispered the same question to an engineer on my right. His answer could be only slightly construed as similar to the first response―slightly. In fact, the only true commonality between the answers came in the form of two (2) words―single function. Clear as Mississippi mud pie, right? During a break, I asked the question of several in attendance, and got answers that ran a mile wide and an inch deep, providing, essentially, little information―possibly less than before.

I turned to Google, of course. But I discovered something I didn’t believe was possible―there was literally no definition or information I could find that even attempted to distinguish what, exactly, makes for a network appliance. According to “my history” in Google Chrome, I typed in over thirty (30) variations of the same question. Nothing. Frustrating. But I had something better than Google.

It works with governmental elections

GDT has over two hundred (200) solutions architects and engineers, all talented and tenured, who have earned, collectively, well over one thousand (1,000) of the industry’s highest certifications. Why not poll some of the industry’s best and brightest with the question, “What differentiates an ‘appliance’ from other networking equipment?”

They weren’t allowed to reply “TO ALL,” so that others’ answers wouldn’t influence theirs. Also, they couldn’t Google the question, or any derivative thereof, which, based on my experience, wouldn’t have helped anyway.

Drum roll, please

Responses came pouring in, even though it was after 5 PM on a Friday afternoon. So in lieu of posting well over one hundred (100) responses, I decided to craft, based on those responses (one was even a haiku), a definition of a network appliance related to how it’s differentiated from a non-appliance. Here goes…

A network appliance is different from a non-appliance because it comes pre-configured and is built with a specific purpose in mind.

And because I’m a fan of analogies, here’s one I received:

“You can make toast in the oven, but you’ve got a toaster, a device that is specifically made for making toast. Because it’s designed for a narrow problem set, the toaster is smaller than the oven, more energy efficient, easier to operate, and cheaper. An appliance is something that is able to be better than a general-purpose tool because it does less.”

And for you Haiku fans:

“It is a server

Or a virtual machine

That runs services”

There it is―a definition, an analogy, even a Haiku. Now don’t get me started on the word device.

Turn, like I did, to the experts

GDT’s team of solutions architects and engineers maintain the highest certification levels in the industry. They’ve crafted, installed and currently manage the networks and security needs of some of the largest enterprises and service providers in the world. They can be reached at SolutionsArchitects@gdt.com or at Engineering@gdt.com. Great folks; they’d love to hear from you.

Riding the Hyperconvergence Rails

By Richard Arneson

If your organization isn’t on, or planning to get on, the road to hyperconvergence (HCI), you may soon be left waving at your competitors as the HCI train flies by. A recent industry study found that approximately 25% of companies currently use hyperconvergence, and another 23% plan on moving to it by the end of this year. And those percentages are considerably higher in certain sectors, such as healthcare and government. In addition to the many benefits HCI delivers—software-defined storage (SDS), an easier way to launch new cloud services, modernization of application development and deployment, and far more flexibility for data centers and infrastructures—it is currently providing customers, according to the study, an average of 25% in OPEX savings. It might be time to step up to the ticket window.

All Aboard!

If you haven’t heard about Dell EMC’s VxRail appliances, it’s time you did―they’ve been around for about two (2) years now. In the first year alone, Dell EMC sold in excess of 8,000 nodes to well over 1,000 customers. And in May of this year, it announced a significant upgrade to its HCI portfolio with the launch of more robust VxRail appliances, including significant upgrades to VxRack, its Software-Defined Data Center (SDDC) system. VxRail was developed closely with VMware, of which Dell EMC owns eighty percent (80%).

The VxRail Portfolio of Appliances

All VxRail appliances listed below offer easy configuration flexibility, including future-proof capacity and performance with NVMe cache drives, 25GbE connectivity, and NVIDIA P40 GPUs (graphics processing units). They’re all built on Dell EMC’s latest PowerEdge servers, which are powered by Intel Xeon Scalable processors, and are available in all-flash or hybrid configurations.

G Series―the G stands for general, as in general-purpose appliance. It can handle up to four (4) nodes in a 2U chassis.

E Series―whether deployed in the data center or at the edge (hence the letter E), the E Series’ sleek, low-profile design fits into a 1U chassis.

V Series―the V stands for video; it’s VDI-optimized and graphics-ready, supporting up to three (3) graphics accelerators for high-end 2D or 3D visualization. The V Series appliance provides one (1) node in its 2U profile.

P Series―P for performance. Each P Series appliance is optimized for the heaviest of workloads (think databases). Its 2U profile offers one (1) node per chassis.

S Series―storage is the operative word here, and the S Series appliance is perfect for storage-dense applications, such as Microsoft Exchange or SharePoint. And if big data and analytics are on your radar screen, the S Series appliance is the right one for you. Like the P and V Series appliances, the S Series provides one (1) node in its 2U profile.

And to help you determine which VxRail appliance is right for your organization, Dell EMC offers a nifty, simple-to-use VxRail Right Sizer Tool.

Perfect for VMware Customers

VMware customers are already familiar with the vCenter Server, which provides a centralized management platform to manage VMware environments. All VxRail appliances can be managed through it, so there’s no need to learn a new management system.

Questions about Hyperconvergence or VxRail?

For more information about hyperconvergence, including what Dell EMC’s VxRail appliances can provide for your organization, contact GDT’s solutions architects and engineers at SolutionsArchitects@gdt.com. They hold the highest technical certification levels in the industry, and have designed and implemented hyperconverged solutions, including ones utilizing GDT partner Dell EMC’s products and services, for some of the largest enterprises and service providers in the world. They’d love to hear from you.

When good fiber goes bad

By Richard Arneson

Fiber optics brings to mind a number of things, all of them great: speed, reliability, high bandwidth, long-distance transmission, immunity to electromagnetic interference (EMI), and strength and durability. Fiber optic cable is made of fine strands of glass, which might not sound durable, but flip the words fiber and glass and you’ve got a different story.

Fiberglass, as the name not so subtly suggests, is made up of glass fibers―at least partially. It achieves its incredible strength once the glass is combined with plastic. Originally used as insulation, the fiberglass train gained considerable steam in the 1970s after asbestos, which had been widely used for insulation for over fifty (50) years, was found to cause cancer. But that’s enough about insulation.

How Fiber goes bad

As is often the case with good things, fiber optics doesn’t last forever. Or, it should be said, it doesn’t perform ideally forever. There are several issues that prevent it from delivering its intended goals.

Attenuation

Data transmission over fiber optics involves shooting light between input and output locations, and when that light loses intensity, or power, along the way, it’s known as attenuation. High attenuation is bad; low is good. There’s actually a mathematical equation that calculates the degree of attenuation, and this sum of all losses can be caused by degradation in the fiber itself, poor splice points, or any point or junction where the fiber is connected.
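That equation is the standard decibel loss calculation. As a worked example, a link that loses half its optical power between input and output has attenuated by about 3 dB:

    \alpha_{\text{dB}} \;=\; 10\,\log_{10}\!\left(\frac{P_{\text{in}}}{P_{\text{out}}}\right)
    \qquad \text{e.g.} \qquad
    10\,\log_{10}\!\left(\frac{1\ \text{mW}}{0.5\ \text{mW}}\right) \approx 3\ \text{dB}

Divide by the link length and you get the per-distance figure (dB/km) that fiber datasheets quote.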

Dispersion

When you shine a flashlight, the beam of light spreads out over distance. This is dispersion. It’s expected, usually needed, when using a flashlight, but it’s not your friend when it occurs in fiber optics. In fiber, dispersion worsens with distance; the farther a signal is transmitted, the more spread out, or degraded, it becomes. The signal must still propagate enough light to meet the bare minimum required by the receiving electronics.
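For chromatic dispersion, the most commonly cited flavor, the pulse spreading is the product of three factors:

    \Delta t \;=\; D \cdot L \cdot \Delta\lambda

where D is the fiber’s dispersion parameter in ps/(nm·km), L is the link length in km, and Δλ is the transmitter’s spectral width in nm. For typical single-mode fiber at 1550 nm (D ≈ 17 ps/(nm·km)), a source with a 0.1 nm spectral width spreads by roughly 170 ps over a 100 km run.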

Scattering

Signal loss or degradation can occur when there are microscopic variations in the fiber, which, well, scatter the light. Scattering can be caused by fluctuations in the fiber’s composition or density, and is most often due to issues in manufacturing.

Bending

When fiber optic cables are bent too much (and yes, there’s a mathematical formula for that), there can be a loss or degradation in data delivery. Bending can cause the light to be reflected at odd angles, and can be due to bending of the outer cladding (macrobending) or bending within it (microbending).
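The formula in question is a minimum bend radius specification. A commonly cited rule of thumb keys off the cable’s outer diameter D, though the authoritative number is always on your cable’s datasheet:

    R_{\min} \;\approx\; 10 \times D \ \text{(unloaded)}
    \qquad
    R_{\min} \;\approx\; 20 \times D \ \text{(under tension)}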

To the rescue―the Fiber Optic Characterization Study

Thankfully, determining the health of fiber optics doesn’t rely on a “plug it in and see if it works” approach. It’s a good thing, considering there are an estimated 113,000 miles of fiber optic cable traversing the United States. And that number just represents “long haul” fiber; it doesn’t include fiber networks built within cities or metro areas.

Fiber Characterization studies determine the overall health of a fiber network. The study consists of a series of tests that ultimately determine if the fiber in question can deliver its targeted bandwidth. As part of the study, connectors are tested (which cause the vast majority of issues), and the types and degrees of signal loss are calculated, such as core asymmetry, polarization, insertion and optical return loss, backscattering, reflection and several types of dispersion.

As you probably guessed, Fiber Characterization studies aren’t conducted in-house, unless your house maintains the engineering skill sets and equipment to carry it out.

Questions about Fiber Characterization studies? Turn to the experts

Yes, fiber optics is glass, but that doesn’t mean it will last forever, even if it never tangles with its arch nemesis―the backhoe. Whether it’s buried underground or strung aerially, it has a shelf life. And while its shelf life is far longer than its copper or coax counterparts, it will degrade, then fail, over time. Whether you’re a service provider or utilize your own enterprise fiber optic network, success relies on the three (3) D’s―dependable delivery of data. A Fiber Characterization Study will help you achieve all three.

If you have questions about optical networking, including Fiber Characterization studies, contact The GDT Optical Transport Team at Optical@gdt.com. They’re highly experienced optical engineers and architects who support some of the largest enterprises and service providers in the world. They’d love to hear from you.

The Hyper in Hyperconvergence

By Richard Arneson

The word hyper probably brings to mind energy, and lots of it, possibly as it relates to a kid who paints on the dining room wall or breaks things, usually of value. But in the IT industry, hyper takes on an entirely different meaning, at least when combined with its compound counterpart―visor.

Hyperconvergence, in regards to data center infrastructures, is a step up from convergence, and a stepping stone to composable infrastructure. And, of course, convergence is an upgrade from traditional data center infrastructures, which are still widely used but eschew, among other things, virtualization. Traditional data center infrastructures are heavily siloed, requiring separate skill sets in storage, networking, software, et al.

The Hypervisor―the engine that drives virtualization

Another compound word using hyper is what delivers the hyper in hyperconvergence ― hypervisor. In hyperconvergence, hypervisors manage virtual machines (VMs), each of which can run its own programs but gives the appearance of running the host hardware’s memory, processor and resources. The word hypervisor sounds like a tangible product, but it’s software, and is provided by, among others, market leaders VMware, Microsoft and Oracle. This hypervisor software is what allocates those resources, including memory and processor, to the VMs. Think of hypervisors as a platform for virtual machines.
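To see the platform-for-VMs idea in code, here’s a short sketch using the libvirt Python bindings; it assumes a Linux host running the KVM/QEMU hypervisor with libvirt installed, which is just one convenient example among the vendors named above:

    # Short sketch using the libvirt Python bindings (pip install
    # libvirt-python), assuming a Linux host running KVM/QEMU.
    import libvirt

    # Connect to the local hypervisor: the "platform for VMs" itself.
    conn = libvirt.open("qemu:///system")

    # Ask the hypervisor which VMs it manages and what it's allocated them.
    for dom in conn.listAllDomains():
        state, max_mem_kib, _, vcpus, _ = dom.info()
        print(f"{dom.name()}: {vcpus} vCPU(s), {max_mem_kib // 1024} MiB")

    conn.close()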

Two (2) Types of Hypervisors

Hypervisors come in two (2) flavors, and deciding between them comes down to several issues, including compatibility with existing hardware, the level and type of management required, and performance that will satisfy your organization’s specific needs. Oh, and don’t forget budgetary considerations.

Bare-Metal – Type 1

Type 1 hypervisors are loaded directly onto hardware that doesn’t come pre-loaded with an Operating System. Type 1 hypervisors are the Operating System, and they’re more flexible, provide better performance and, as you may have guessed, are more expensive than their Type 2 counterparts. They usually run on single-purpose servers that become part of the resource pools supporting multiple virtual machines and their applications.

Hosted – Type 2

A Type 2 hypervisor runs as an application loaded in the Operating System already installed on the hardware. But because it’s loaded on top of the existing OS, it creates an additional layer of programming, or hardware abstraction―which is another way of saying it’s less efficient.

So which Type will you need?

In the event you’re looking to move to a hyperconverged infrastructure, both the type of hypervisor, and from which partner’s products to choose, will generate a spate of elements to evaluate, such as the management tools you’ll need, which hypervisor will perform best based on your workloads, the level of scalability and availability you’ll require, and, of course, how much you’ll be able to afford.

It’s a big decision, so consulting with hyperconvergence experts should probably be your first order of business. The talented solutions architects and engineers at GDT have delivered hyperconvergence solutions to enterprises and service providers of all sizes. They’d love to hear from you, and can be reached at SolutionsArchitects@gdt.com.

How does IoT fit with SD-WAN?

By Richard Arneson

Now that computing has been truly pushed out to the edge, it brings up questions about how it will mesh with today’s networks. The answer? Very well, especially regarding SD-WAN.

IoT comprises three building blocks that make it work―sensors, gateways and the Cloud. No, smart phones aren’t on the list. In fact, and for simplicity’s sake, let’s not call smart phones devices. The technology sector is particularly adept at incorrectly utilizing words interchangeably. In this case, the confusing word is device. For instance, when you hear statistics about the estimated number of connected devices exceeding 20 billion by 2020, smart phones are not part of that figure. While smart phones are often called devices and do have sensors that can detect tilt (gyroscope) and acceleration (accelerometer), IoT sensors extend beyond those devices (oops, I did it again; let’s call them pieces of equipment) that provide Internet connectivity―laptops, tablets and, yes, smart phones.

Sensors and Gateways and Clouds…oh my

Sensors are the edge devices, and can detect, among other things, temperature, pressure, water quality, and the presence of smoke or gas. Think Ring Doorbell or Nest Thermostat.

The gateway can be hardware or software (sometimes both), and handles the aggregation of connectivity and the encryption and decryption of IoT data. Gateways translate the protocols used by IoT sensors and take on management, onboarding (storage and analytics) and edge computing duties. Gateways, as the name suggests, serve as a bridge between IoT devices, their associated protocols, such as Wi-Fi or Bluetooth, and the environment where the gathered data gets utilized.
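As a toy illustration of that bridging role, here’s a gateway-style sketch using the paho-mqtt Python library; the broker address, topic and sensor reading are all hypothetical:

    # Toy gateway sketch using paho-mqtt (pip install paho-mqtt); the
    # broker hostname, topic and reading below are hypothetical.
    import json
    import paho.mqtt.publish as publish

    def read_temperature() -> float:
        """Stand-in for a real sensor read (e.g., Bluetooth or a GPIO bus)."""
        return 21.5

    # Translate the local reading into a cloud-friendly JSON message and
    # publish it to an MQTT broker for upstream storage and analytics.
    payload = json.dumps({"sensor": "thermostat-01",
                          "temp_c": read_temperature()})
    publish.single("site/living-room/temperature", payload,
                   hostname="broker.example.com")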

SD-WAN and IoT

SD-WAN simplifies network management―period. And a subset of that simplicity comes in the form of visibility and predictability, which is exactly what IoT needs. SD-WAN can help ensure IoT devices in remote locations get the bandwidth and security they need, which is especially important considering IoT devices don’t have much computing power (for example, they usually don’t have enough to support Transport Layer Security (TLS)).

SD-WAN gives network managers the ability to segment traffic based on type―in this case, IoT―so device traffic can always be sent over the most optimal path. And SD-WAN traffic can be sent directly to a cloud services provider, such as AWS or Azure. In traditional architectures, such as MPLS, the traffic has to be backhauled to a data center, after which it is handed off to the Internet. Hello, latency―not good for IoT devices that need real-time access and updating.
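Conceptually (this is an illustrative sketch, not any vendor’s actual API), that class-based path selection looks something like this:

    # Illustrative sketch of SD-WAN-style, class-based path selection; the
    # paths, metrics and classes are made up, not any vendor's actual API.
    PATHS = {
        "mpls":      {"up": True, "latency_ms": 20},
        "broadband": {"up": True, "latency_ms": 35},
        "lte":       {"up": True, "latency_ms": 60},
    }

    # Preference order per traffic class: voice wants low latency, while
    # IoT telemetry can ride cheaper links straight to the cloud provider.
    POLICY = {
        "voice": ["mpls", "broadband", "lte"],
        "iot":   ["broadband", "lte", "mpls"],
    }

    def select_path(traffic_class: str) -> str:
        """Return the first healthy path in the class's preference order."""
        for path in POLICY[traffic_class]:
            if PATHS[path]["up"]:
                return path
        raise RuntimeError(f"no healthy path for {traffic_class!r}")

    print(select_path("iot"))  # -> broadband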

SD-WAN is vendor-agnostic, and can run over virtually any existing transport, such as cellular, broadband and Wi-Fi, which makes it easier to connect devices in some of the more far-flung locations. And management can be accomplished from a central location, which makes it easier to integrate services across the IoT architecture of your choosing.

As mentioned earlier, there will be an estimated 20 billion IoT devices in use by 2020, up from 11 billion presently (by 2025…over 50 billion). The number of current endpoints being used is amazing, but the growth rate is truly staggering. And for IoT to deliver on its intended capabilities, it needs a network that can help it successfully deliver access to real-time data. That sounds like SD-WAN.

Here’s a great resource

To find out more about SD-WAN and exactly how it provides an ideal complement to IoT, contact GDT’s tenured SD-WAN engineers and solutions architects at SDN@gdt.com. They’ve implemented SD-WAN and IoT solutions for some of the largest enterprise networks and service providers in the world. They’d love to hear from you.

Unwrapping DevOps

By Richard Arneson

As the name suggests, DevOps is the shortened combination of two (2) words―development and operations. Traditionally, application development was time-consuming, fraught with errors and bugs, and, ultimately, resulted in the bane of the business world―a slow time to market.

Prior to DevOps, which addresses that slow-to-market issue, application developers worked in sequestered silos, collaborating with operations minimally, if at all. They’d gather requirements from operations, write huge chunks of code, then deliver their results weeks, maybe months, later.

The primary issue that can sabotage any relationship, whether personal or professional, is a lack of communication. Now sprinkle collaboration into the mix, and you have DevOps. It broke down the communication and collaboration walls that still exist―if DevOps isn’t being utilized―between the two (2). The result? Faster time to market.

Off-Shoot of Agile Development

DevOps, which has been around for approximately ten (10) years, was born out of Agile Development, created roughly ten (10) years prior to that. Agile Development is, simply, an approach to software development. Agile, as the name suggests, delivers the final project with more speed, or agility. It breaks down software development into smaller, more manageable chunks, and solicits feedback throughout the development process. As a result, application development became far more flexible and capable of responding to needs and changes much faster.

While many use Agile and DevOps interchangeably, they’re not the same

While Agile provides tremendous benefits as it relates to software development, it stops short of what DevOps provides. While DevOps can certainly utilize Agile methodologies, it doesn’t drop off the finished product, then quickly move on to the next one. Agile is a little like getting a custom-made device that solves some type of problem; DevOps will make the device, as well, but will also install it in the safest and most effective manner. In short, Agile is about developing applications―DevOps both develops and deploys them.

How does DevOps address Time to Market?

Prior to DevOps and Agile, application developers would deliver their release to operations, which would be responsible for testing the resultant software. And when testing isn’t conducted throughout the development process, operations is left with a very large application, often littered with issues and errors. Hundreds of thousands of lines of code that access multiple databases, networks and interfaces can require a tremendous amount of man hours to test, which in turn takes those man hours off other pressing projects―inefficient and wasteful. And often there was no single person or entity responsible for overseeing the entire project, and each department may have had different success metrics. Going back to the relationship analogy, poor communication and collaboration mean frustration and dissatisfaction for all parties involved. And with troubled relationships comes finger-pointing.

Automation

One of the key elements of DevOps is its use of automation, which helps deliver faster, more reliable deployments. Through the use of automation testing tools currently available―Selenium, Test Studio and TestNG, to name a few―test cases can be constructed, then run while the application is being built. This reduces testing times exponentially and helps ensure each of the processes and features has been developed error-free.
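As a flavor of what that looks like in practice, here’s a small Selenium sketch of the kind of test that can run automatically against every build; the URL and element IDs are hypothetical:

    # Small Selenium sketch (pip install selenium) of a build-time UI test;
    # the URL and element IDs below are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # assumes a local Chrome installation
    try:
        driver.get("https://app.example.com/login")
        driver.find_element(By.ID, "username").send_keys("test-user")
        driver.find_element(By.ID, "password").send_keys("not-a-real-secret")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title, "login flow broke in this build"
    finally:
        driver.quit()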

Automation is utilized for more than just testing, however. Workflows in development and deployment can be automated, enhancing collaboration and communication and, of course, shortening the delivery process. Production-ready environments that have already been tested can be continuously delivered. Real-time reporting can provide a window into any changes, or defects, that have taken place. And automated processes mean fewer mistakes due to human error.

Questions about what DevOps can deliver to your organization?

While DevOps isn’t a product, it’s certainly an integral component to consider when evaluating a Managed Services Provider (MSP). GDT’s DevOps professionals have time and again helped to provide and deploy customer solutions that have helped shorten their time to market and more rapidly enjoy positive business outcomes. For more information about DevOps and the many benefits it can provide to organizations of all sizes, contact GDT’s talented, tenured solutions architects at SolutionsArchitects@gdt.com. They’d love to hear from you.

How do you secure a Cloud?

By Richard Arneson

Every organization has moved, has plans to move, or wants to move to The Cloud. And by 2020, most will be there. According to a recent survey, within two (2) years 83% of enterprise workloads will be in The Cloud―41% on public Clouds, like AWS and Microsoft Azure, 20% on private Clouds, and 22% as part of hybrid architectures. With the amount of traffic currently accessing The Cloud, and considering the aforementioned survey figures, security will continue to be at the forefront of IT departments’ collective minds―as well it should.

With organizations selectively determining what will run in The Cloud, security can prove challenging. Now throw in DevOps’ ability to build and test Cloud apps easier and faster, and you’ve amped up those Cloud security concerns significantly.

Security Solutions geared for The Cloud

To address the spate of Cloud-related security concerns, Cisco built an extensive portfolio of solutions, listed below, to secure customers’ Cloud environments, whether public, private, or a combination of both (hybrid).

Cisco Cloudlock

The Cloudlock DLP (Data Loss Prevention) technology doesn’t rest; it continuously monitors Cloud environments to detect sensitive information, then protects it. Cloudlock controls the Cloud apps that connect to customers’ networks, enforces data security and security policies, and provides risk profiles.

Cisco Email Security

Cisco Email Security protects Cloud-hosted email, defending organizations from threats and phishing attacks in G Suite and Office 365.

Cisco Stealthwatch Cloud

Stealthwatch Cloud detects abnormal behaviors and threats, then quickly quells them before they evolve into a disastrous breach.

Cisco Umbrella

Cisco Umbrella provides user protection regardless of the type, or location, of Internet access. It utilizes deep threat intelligence to provide a safety net—OK, an umbrella—for users by preventing them access to malicious, online destinations, and thwarts any suspect callback activities.

Cisco SaaS Cloud Security

If users are off-network, anti-virus software is often the only protection available. Cisco’s AMP (Advanced Malware Protection) for Endpoints prevents threats at their point of entry, and continuously tracks each and every file that accesses those endpoints. AMP can uncover the most advanced of threats, including ransomware and file-less malware.

Cisco Hybrid Cloud Workload Protection

Cisco Tetration, Cisco’s proprietary analytics platform, provides workload protection for multi-cloud environments and data centers. It uses zero-trust segmentation, which enables users to quickly identify security threats and reduce their attack surface (all the endpoints where threats can gain entry). It supports on-prem and public Cloud workloads, and is infrastructure-agnostic.

Cisco’s Next-Gen Cloud Firewalls

Cisco’s VPN capabilities and virtual Next-Gen Firewalls provide flexible deployment options, so protection can be administered exactly where and when it’s needed, whether on-prem or in the Cloud.

For more information…

With the help of its state-of-the-art Security Operations Center (SOC), GDT’s team of security professionals and analysts have been securing the networks of some of the most noteworthy enterprises and service providers in the world. They’re highly experienced at implementing, managing and monitoring Cisco security solutions. You can reach them at SOC@gdt.com. They’d love to hear from you.