Solutions Blog

The meaning of the word Tetration…and why you should learn it

You won’t find its definition in Merriam-Webster or The Oxford English Dictionary. But if you’re in the IT industry, it’s a term you’ve either heard or will be hearing a lot about soon. Why? Because Tetration is what Cisco has named its robust analytics platform. In case you’re wondering, tetration (the word, not Cisco’s platform) is iterated exponentiation, the fourth operation in the sequence that begins with addition, multiplication and exponentiation (gulp). It turns small inputs into staggeringly large numbers, which makes it an apt metaphor for a platform built to process huge volumes of data and, based on that, provide usable, meaningful results. Huge amounts of data delivering usable, meaningful results: yes, the word tetration perfectly describes Cisco’s analytics platform.

Addressing the limitations of perimeter-based security

Cisco Tetration comprehensively addresses a very complex environment―multi-cloud data centers and their respective applications’ workloads. Perimeter-based security falls short of protecting multi-cloud data centers and those applications. Tetration addresses just that, providing workload protection using zero-trust segmentation, which is an industry-wide security philosophy centered around the belief that nothing should be automatically trusted, and everything must be verified.

With Cisco Tetration, customers can identify security incidents faster, and, as a result, reduce their company’s attack surface. While being infrastructure-agnostic and capable of supporting on-premises and public cloud workloads, Tetration enables data center security to be adaptive, attainable and effective.

Tetration is part of Cisco’s portfolio of security products, the others being Application Centric Infrastructure (ACI), Stealthwatch and its Firepower Next Gen Firewalls.

How Whitelisting and Segmentation are addressed in Cisco Tetration


Whitelisting refers to applications that have been approved (yes, it’s the opposite of blacklisting). Cisco Tetration automates whitelisting policies based on the dependency, communication and behavior of applications. It keeps an inventory of software packages (including associated versions) and baselines processes, after which it looks for any behavioral anomalies. Cisco Tetration constantly inventories the applications and maintains information about any exposures specifically related to them.
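The policy-automation idea above can be pictured with a small sketch. This is purely illustrative (it is not Tetration's actual engine, and the tier names and flows are invented): observed application flows become explicit permit rules, and anything unobserved is denied by default, which is the zero-trust posture.

```python
# Illustrative sketch (not Tetration's actual engine; names invented):
# observed application flows become explicit permit rules, and anything
# not on the list is denied -- the zero-trust default.
observed_flows = [
    ("web-tier", "app-tier", 8080),
    ("app-tier", "db-tier", 5432),
]

def build_allowlist(flows):
    """Turn observed (src, dst, port) flows into permit rules."""
    return set(flows)

def is_permitted(policy, src, dst, port):
    """Zero trust: permit only what was explicitly whitelisted."""
    return (src, dst, port) in policy

policy = build_allowlist(observed_flows)
print(is_permitted(policy, "web-tier", "app-tier", 8080))  # True
print(is_permitted(policy, "web-tier", "db-tier", 5432))   # False
```

The key design point is the default: a flow that was never observed and approved simply isn't in the set, so it's denied without needing an explicit block rule.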


Once whitelisting policies have been automatically applied, those whitelisted applications are segmented across different domains, regardless of infrastructure type, such as on-prem or cloud-based. So if a cyber attacker has penetrated perimeter-based security, the segmenting of applications prevents lateral movement and communication once inside your network. Segmentation allows users to only access specific resources, which helps better detect suspicious behaviors or patterns. If there is a breach, segmentation limits its ill-effects to a local, much smaller subnet.

The Meaning of Cisco Tetration

While you won’t find Cisco Tetration in any of the aforementioned dictionaries, here’s a quick, bulleted summary about what it provides to customers:

  • Quick detection of suspicious application activities and anomalies
  • Dramatic reductions in attack surface
  • An automated zero-trust security model
  • Workload protection across on-prem and cloud data centers

Security Experts

GDT’s team of security professionals and analysts have been protecting, from their state-of-the-art Security Operations Center (SOC), the networks of some of the most notable enterprises and service providers in the world. Reach out to them; they’d love to hear from you.

What exactly is a Network Appliance?

We work in an industry rife with nomenclature issues. For instance, Hybrid IT is often used interchangeably with Hybrid Cloud; it shouldn’t be, because they’re different. They were even referred to that way, in an “also known as” manner, within a beautiful, 4-color brochure produced by one of the leading equipment vendors in the IT industry. I’ve seen hyperconverged substituted for converged, SAN confused with NAS, and SDN and SD-WAN listed as equivalents. The list is seemingly endless.

The good news? Getting the answer is pretty easy, and only a few clicks away. Yes, Google is, for most, the answer to getting correct answers. Ask it a question, then read through the spate of corresponding articles from reputable sources, and you can generally deduce the right answer. When ninety-eight (98) answers say it’s A, and one (1) claims it’s B, it’s probably A.

When does “it” become an Appliance?

Sitting in a non-company presentation recently, I heard the word appliance used several times, and, even though I’ve been in the IT and telecommunications industry for years, I realized I didn’t technically know what appliance meant, or how it differed from other networking equipment. I turned to the person seated at my left and asked, “What’s the difference between an appliance and a piece of networking equipment, be it a router, server, etc.?” The answer he provided offered little help. In an attempt to hide my dissatisfaction, I quietly whispered the same question to an engineer on my right. His answer could be only slightly construed as similar to the first response―slightly. In fact, the only true commonality between the answers came in the form of two (2) words―single function. Clear as Mississippi mud pie, right? During a break, I asked the question of several in attendance, and got answers that ran a mile wide and an inch deep, but provided, essentially, little information, possibly less than before.

I turned to Google, of course. But I discovered something I didn’t believe was possible―there was literally no definition or information I could find that even attempted to distinguish what, exactly, makes for a network appliance. According to “my history” in Google Chrome, I typed in over thirty (30) variations of the same question. Nothing. Frustrating. But I had something better than Google.

It works with governmental elections

GDT has over two-hundred (200) solutions architects and engineers, all talented and tenured, who have earned, collectively, well over one thousand (1,000) of the industry’s highest certifications. Why not poll some of the industry’s best and brightest with the question, “What differentiates an ‘appliance’ from other networking equipment?”

They weren’t allowed to reply “TO ALL,” so that others’ answers wouldn’t influence theirs. Also, they couldn’t Google the question, or any derivative thereof, which, based on my experience, wouldn’t have helped anyway.

Drum roll, please

Responses came pouring in, even though it was after 5 PM on a Friday. So in lieu of posting well over one hundred (100) responses, I decided to craft, based on those responses (one was even a haiku), a definition of a network appliance centered on how it’s differentiated from a non-appliance. Here goes…

A network appliance is different from a non-appliance because it comes pre-configured and is built with a specific purpose in mind.

And because I’m a fan of analogies, here’s one I received:

“You can make toast in the oven, but you’ve got a toaster, a device that is specifically made for making toast. Because it’s designed for a narrow problem set, the toaster is smaller than the oven, more energy efficient, easier to operate, and cheaper. An appliance is something that is able to be better than a general-purpose tool because it does less.”

And for you Haiku fans:

“It is a server

Or a virtual machine

That runs services”

There it is―a definition, an analogy, even a Haiku. Now don’t get me started on the word device.

Turn, like I did, to the experts

GDT’s team of solutions architects and engineers maintain the highest certification levels in the industry. They’ve crafted, installed and currently manage the networks and security needs of some of the largest enterprises and service providers in the world. Reach out to them. Great folks; they’d love to hear from you.

Riding the Hyperconvergence Rails

If your organization isn’t on, or planning to get on, the road to hyperconvergence (HCI), you may soon be left waving at your competitors as the HCI train flies by. A recent industry study found that approximately 25% of companies currently use hyperconvergence, and another 23% plan on moving to it by the end of this year. And those percentages are considerably higher in certain sectors, such as healthcare and government. In addition to the many benefits HCI delivers—software-defined storage (SDS), an easier way to launch new cloud services, modernization of application development and deployment, and far more flexibility for data centers and infrastructures—it is currently providing customers, according to the study, an average of 25% in OPEX savings. It might be time to step up to the ticket window.

All Aboard!

If you haven’t heard about Dell EMC’s VxRail appliances, it’s time you do―they’ve been around for about two (2) years now. In that first year alone, they sold in excess of 8,000 nodes to well over 1,000 customers. And in May of this year, they announced a significant upgrade to their HCI portfolio with the launch of more robust VxRail appliances, including significant upgrades to VxRack, its Software-Defined Data Center (SDDC) system. VxRail was closely developed with VMware, of which Dell EMC owns eighty percent (80%).

The VxRail Portfolio of Appliances

All VxRail appliances listed below offer easy configuration flexibility, including future-proof capacity and performance with NVMe cache drives, 25GbE connectivity, and NVIDIA P40 GPUs (graphics processing units). They’re all built on Dell EMC’s latest PowerEdge servers, which are powered by Intel Xeon Scalable processors, and are available in all-flash or hybrid configurations.

G Series―the G in G-Series stands for general, as in general purpose appliance. It can handle up to four (4) nodes in a 2U chassis.

E Series―whether deployed in the data center or at the edge (hence the letter E), the E Series’ sleek, low-profile design fits into a 1U chassis.

V Series―the V stands for video; it is VDI-optimized graphics ready and can support up to three (3) graphics accelerators to support high-end 2D or 3D visualization. The V Series appliance provides one (1) node in its 2U profile.

P Series―P for performance. Each P Series appliance is optimized for the heaviest of workloads (think databases). Its 2U profile offers one (1) node per chassis.

S Series―Storage is the operative word here, and the S Series appliance is perfect for storage-dense applications, such as Microsoft Exchange or SharePoint. And if big data and analytics are on your radar screen, the S Series appliance is the right one for you. Like the P and V Series appliances, the S Series provides one (1) node in its 2U profile.

And to help you determine which VxRail appliance is right for your organization, Dell EMC offers a nifty, simple-to-use VxRail Right Sizer Tool.

Perfect for VMware Customers

VMware customers are already familiar with the vCenter Server, which provides a centralized management platform to manage VMware environments. All VxRail appliances can be managed through it, so there’s no need to learn a new management system.

Questions about Hyperconvergence or VxRail?

For more information about hyperconvergence, including what Dell EMC’s VxRail appliances can provide for your organization, contact GDT’s solutions architects and engineers. They hold the highest technical certification levels in the industry, and have designed and implemented hyperconverged solutions, including ones utilizing GDT partner Dell EMC’s products and services, for some of the largest enterprises and service providers in the world. They’d love to hear from you.

When good fiber goes bad

Fiber optics brings to mind a number of things, all of them great: speed, reliability, high bandwidth, long-distance transmission, immunity to electromagnetic interference (EMI), and strength and durability. Fiber optic cable is made of fine glass, which might not sound durable, but flip the words fiber and glass and you’ve got a different story.

Fiberglass, as the name not so subtly suggests, is made up of glass fibers―at least partially. It achieves its incredible strength once the glass is combined with plastic. Originally used as insulation, the fiberglass train gained considerable steam in the 1970s after asbestos, which had been widely used for insulation for over fifty (50) years, was found to cause cancer. But that’s enough about insulation.

How Fiber goes bad

As is often the case with good things, fiber optics doesn’t last forever. Or, it should be said, it doesn’t perform ideally forever. There are several issues that prevent it from delivering its intended goals.


Attenuation

Data transmission over fiber optics involves shooting light between input and output locations; when the light intensity degrades, or loses power, it’s known as attenuation. High attenuation is bad; low is good. There’s a mathematical equation that calculates the degree of attenuation, and this sum of all losses can be caused by degradation in the fiber itself, poor splice points, or any point or junction where the fiber is connected.
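That “sum of all losses” is worked out in decibels. As a rough sketch (the per-kilometer, splice and connector loss figures below are assumed, illustrative values, not quoted specs for any particular fiber):

```python
import math

def attenuation_db(power_in_mw, power_out_mw):
    """Attenuation in dB = 10 * log10(P_in / P_out)."""
    return 10 * math.log10(power_in_mw / power_out_mw)

def total_link_loss(km, fiber_db_per_km, splices, splice_db,
                    connectors, connector_db):
    """The 'sum of all losses': fiber itself + splice points + connectors."""
    return (km * fiber_db_per_km
            + splices * splice_db
            + connectors * connector_db)

# A signal that loses half its power has attenuated about 3 dB:
print(round(attenuation_db(1.0, 0.5), 2))  # 3.01

# A 40 km span with assumed, illustrative loss figures:
print(round(total_link_loss(km=40, fiber_db_per_km=0.25,
                            splices=4, splice_db=0.1,
                            connectors=2, connector_db=0.5), 2))  # 11.4
```

The useful property of decibels is that the individual losses simply add, which is why a characterization study can attribute the total budget to fiber, splices and connectors separately.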


Dispersion

When you shine a flashlight, the beam of light spreads out over distance. This is dispersion. It’s expected, even useful, with a flashlight, but it’s not your friend when it occurs in fiber optics. In fiber, dispersion worsens with distance: the farther a pulse travels, the more it spreads and the more degraded the signal becomes. Enough light must still arrive intact at the far end to meet the minimum required by the receiving electronics.
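One common form, chromatic dispersion, is often estimated as the simple product of the fiber's dispersion coefficient, the span length, and the light source's spectral width. A hedged sketch with assumed, illustrative numbers (not spec values for any particular cable):

```python
def chromatic_dispersion_ps(coeff_ps_nm_km, length_km, linewidth_nm):
    """Pulse spreading in picoseconds ~= D * L * delta_lambda."""
    return coeff_ps_nm_km * length_km * linewidth_nm

# Assumed illustrative values: standard single-mode fiber at 1550 nm is
# commonly quoted around 17 ps/(nm*km); an 80 km span, 0.1 nm source.
spread = chromatic_dispersion_ps(17, 80, 0.1)
print(round(spread, 1))  # 136.0 ps of pulse spreading
```

Once the spreading approaches the width of a bit period, adjacent pulses start to overlap, which is why dispersion, not just raw power loss, limits distance at high data rates.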


Scattering

Signal loss or degradation can occur when there are microscopic variations in the fiber, which scatter the light. Scattering can be caused by fluctuations in the fiber’s composition or density, and is most often due to issues in manufacturing.


Bending

When fiber optic cables are bent too much (and yes, there’s a mathematical formula for that, too), data delivery can be lost or degraded. Bending can cause the light to be reflected at odd angles, and can involve the outer cladding (macroscopic bending) or bending within it (microscopic bending).

To the rescue―the Fiber Optic Characterization Study

Thankfully, determining the health of fiber optics doesn’t rely on a “plug it in and see if it works” approach. That’s a good thing, considering there are an estimated 113,000 miles of fiber optic cable traversing the United States. And that number represents only “long haul” fiber; it doesn’t include fiber networks built within cities or metro areas.

Fiber Characterization studies determine the overall health of a fiber network. The study consists of a series of tests that ultimately determine if the fiber in question can deliver its targeted bandwidth. As part of the study, connectors are tested (which cause the vast majority of issues), and the types and degrees of signal loss are calculated, such as core asymmetry, polarization, insertion and optical return loss, backscattering, reflection and several types of dispersion.

As you probably guessed, Fiber Characterization studies aren’t conducted in-house, unless your house maintains the engineering skill sets and equipment to carry it out.

Questions about Fiber Characterization studies? Turn to the experts

Yes, fiber optics is glass, but that doesn’t mean it will last forever, even if it never tangles with its arch nemesis―the backhoe. Whether it’s buried underground or strung aerially, it has a shelf life. And while that shelf life is far longer than that of its copper or coax counterparts, it will degrade, then fail, over time. Whether you’re a service provider or run your own enterprise fiber optic network, success relies on the three (3) D’s―dependable delivery of data. A Fiber Characterization Study will help you achieve them.

If you have questions about optical networking, including Fiber Characterization studies, contact the GDT Optical Transport Team. They’re highly experienced optical engineers and architects who support some of the largest enterprises and service providers in the world. They’d love to hear from you.




The Hyper in Hyperconvergence

The word hyper probably brings to mind energy, and lots of it, possibly as it relates to a kid who paints on the dining room wall or breaks things, usually of value. But in the IT industry, hyper takes on an entirely different meaning, at least when combined with its compound counterpart―visor.

Hyperconvergence, in regard to data center infrastructures, is a step up from convergence, and a stepping stone to composable infrastructure. And, of course, convergence is an upgrade from traditional data center infrastructures, which are still widely used but eschew the use of, among other things, virtualization. Traditional data center infrastructures are heavily siloed, requiring separate skill sets in storage, networking, software, et al.

The Hypervisor―the engine that drives virtualization

Another compound word using hyper is what delivers the hyper in hyperconvergence―hypervisor. In hyperconvergence, hypervisors manage virtual machines (VMs), each of which can run its own programs while appearing to have the host hardware’s memory, processor and other resources to itself. The word hypervisor sounds like a tangible product, but it’s software, provided by, among others, market leaders VMware, Microsoft and Oracle. The hypervisor is what allocates those resources, including memory and processor, to the VMs. Think of hypervisors as a platform for virtual machines.

Two (2) Types of Hypervisors

Hypervisors come in two (2) flavors, and deciding between either comes down to several issues, including compatibility with existing hardware, the level and type of management required, and performance that will satisfy your organization’s specific needs. Oh, and don’t forget budgetary considerations.

Bare-Metal – Type 1

Type 1 hypervisors are loaded directly onto hardware that doesn’t come pre-loaded with an Operating System. Type 1 hypervisors are the Operating System, and are more flexible, provide better performance and, as you may have guessed, are more expensive than their Type 2 counterparts. They usually run on single-purpose servers that become part of the resource pools supporting multiple applications on virtual machines.

Hosted – Type 2

A Type 2 hypervisor runs as an application loaded in the Operating System already installed on the hardware. But because it’s loaded on top of the existing OS, it creates an additional layer of programming, or hardware abstraction, which is another way of saying it’s less efficient.

So which Type will you need?

In the event you’re looking to move to a hyperconverged infrastructure, both the type of hypervisor, and from which partner’s products to choose, will generate a spate of elements to evaluate, such as the management tools you’ll need, which hypervisor will perform best based on your workloads, the level of scalability and availability you’ll require, and, of course, how much you’ll be able to afford.

It’s a big decision, so consulting with hyperconvergence experts should probably be your first order of business. The talented solutions architects and engineers at GDT have delivered hyperconvergence solutions to enterprises and service providers of all sizes. They’d love to hear from you.

How does IoT fit with SD-WAN?

Now that computing has been truly pushed out to the edge, it brings up questions about how it will mesh with today’s networks. The answer? Very well, especially regarding SD-WAN.

IoT comprises three types of components that make it work―sensors, gateways and the Cloud. No, smart phones aren’t one of them. In fact, and for simplicity’s sake, let’s not call smart phones devices. The technology sector is particularly adept at using words interchangeably when it shouldn’t, and in this case the confusing word is device. For instance, when you hear that the estimated number of connected devices will top 20 billion by 2020, smart phones are not part of that figure. While smart phones are often called devices and do have sensors that can detect tilt (gyroscope) and acceleration (accelerometer), IoT sensors extend beyond those devices (oops, I did it again; let’s call them pieces of equipment) that provide Internet connectivity―laptops, tablets and, yes, smart phones.

Sensors and Gateways and Clouds…oh my

Sensors are the edge devices, and can detect, among other things, temperature, pressure, water quality, existence of smoke or gas, et al. Think Ring Doorbell or Nest Thermostat.

The gateway can be hardware or software (sometimes both), and is used for the aggregation of connectivity and the encryption and decryption of IoT data. Gateways translate the protocols used by IoT sensors, and handle management, onboarding (storage and analytics) and edge computing. Gateways, as the name suggests, serve as a bridge between IoT devices, their associated protocols, such as Wi-Fi or Bluetooth, and the environment where the gathered data gets utilized.
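The gateway's translation role can be sketched in a few lines. Everything here is hypothetical (the frame format and field names are invented for illustration); the point is simply that device-local data goes in one side and a normalized, cloud-ready payload comes out the other:

```python
import json

def parse_sensor_frame(frame):
    """Device-side format, assumed for illustration: 'sensor_id:metric:value'."""
    sensor_id, metric, value = frame.split(":")
    return {"sensor": sensor_id, "metric": metric, "value": float(value)}

def to_cloud_payload(readings):
    """Aggregate many readings into one JSON message for the cloud side."""
    return json.dumps({"readings": readings})

# Two hypothetical edge sensors reporting in their local format:
frames = ["therm-01:temperature:21.5", "smoke-02:smoke_ppm:0.0"]
payload = to_cloud_payload([parse_sensor_frame(f) for f in frames])
print(payload)
```

Real gateways do far more (protocol conversion, buffering, encryption), but the aggregation-and-normalization step above is the bridge function the paragraph describes.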

SD-WAN and IoT

SD-WAN simplifies network management―period. And a subset of that simplicity comes in the form of visibility and predictability, which is exactly what IoT needs. SD-WAN can help ensure IoT devices in remote locations will get the bandwidth and security needed, which is especially important considering IoT devices don’t maintain a lot of computing power (for example, they usually don’t have enough to support Transport Layer Security (TLS)).

SD-WAN gives network managers the ability to segment traffic based on type―in this case, IoT―so device traffic can always be sent over the most optimal path. And SD-WAN traffic can be sent directly to a cloud services provider, such as AWS or Azure. In traditional architectures, such as MPLS, the traffic has to be backhauled to a data center, after which it is handed off to the Internet. Hello, latency―not good for IoT devices that need real-time access and updating.
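The path-selection idea can be sketched as follows. This is not any vendor's API, just an illustration of steering latency-sensitive IoT traffic onto the best available path that meets its policy:

```python
# Illustrative SD-WAN-style policy routing (all names hypothetical):
# pick the lowest-latency healthy path within the traffic class's budget.
paths = {
    "broadband": {"latency_ms": 30, "up": True},
    "lte":       {"latency_ms": 60, "up": True},
    "mpls":      {"latency_ms": 20, "up": False},  # link currently down
}

def best_path(paths, max_latency_ms):
    """Return the healthiest path within the latency budget, or None."""
    usable = {name: p for name, p in paths.items()
              if p["up"] and p["latency_ms"] <= max_latency_ms}
    if not usable:
        return None
    return min(usable, key=lambda name: usable[name]["latency_ms"])

print(best_path(paths, max_latency_ms=50))  # broadband
```

Because the controller re-evaluates this decision continuously, a path that degrades or fails simply drops out of the usable set and traffic shifts, which is the visibility-plus-predictability benefit described above.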

SD-WAN is vendor-agnostic, and can run over virtually any existing topology, such as cellular, broadband and Wi-Fi, which makes it easier to connect devices in some of the more far-flung locations. And management can be accomplished through a central location, which makes it easier to integrate services across the IoT architecture of your choosing.

As mentioned earlier, there will be an estimated 20 billion IoT devices in use by 2020, up from 11 billion presently (by 2025…over 50 billion). The number of current endpoints being used is amazing, but the growth rate is truly staggering. And for IoT to deliver on its intended capabilities, it needs a network that can help it successfully deliver access to real-time data. That sounds like SD-WAN.

Here’s a great resource

To find out more about SD-WAN and exactly how it provides an ideal complement to IoT, contact GDT’s tenured SD-WAN engineers and solutions architects. They’ve implemented SD-WAN and IoT solutions for some of the largest enterprise networks and service providers in the world. They’d love to hear from you.

Unwrapping DevOps

As the name suggests, DevOps is the shortened combination of two (2) words―development and operations. Originally, application development was time-consuming, fraught with errors and bugs, and, ultimately, resulted in the bane of the business world―a slow time to market.

Prior to DevOps, which addresses that slow to market issue, application developers worked in sequestered silos. They would collaborate with operations at a minimum, if at all. They’d gather requirements from operations, write huge chunks of code, then deliver their results weeks, maybe months, later.

The primary issue that can sabotage any relationship, whether personal or professional, is a lack of communication. Now sprinkle collaboration into the mix, and you have DevOps. It broke down the communication and collaboration walls that still exist, where DevOps isn’t being utilized, between development and operations. The result? Faster time to market.

Off-Shoot of Agile Development

DevOps, which has been around for approximately ten (10) years, was born out of Agile Development, created roughly ten (10) years prior to that. Agile Development is, simply, an approach to software development. Agile, as the name suggests, delivers the final project with more speed, or agility. It breaks down software development into smaller, more manageable chunks, and solicits feedback throughout the development process. As a result, application development became far more flexible and capable of responding to needs and changes much faster.

While many use Agile and DevOps interchangeably, they’re not the same

While Agile provides tremendous benefits as it relates to software development, it stops short of what DevOps provides. While DevOps can certainly utilize Agile methodologies, it doesn’t drop off the finished product, then quickly move on to the next one. Agile is a little like getting a custom-made device that solves some type of problem; DevOps will make the device, as well, but will also install it in the safest and most effective manner. In short, Agile is about developing applications―DevOps both develops and deploys them.

How does DevOps address Time to Market?

Prior to DevOps and Agile, application developers would deliver their release to operations, which was responsible for testing the resultant software. And when testing isn’t conducted throughout the development process, operations is left with a very large application, often littered with issues and errors. Hundreds of thousands of lines of code that access multiple databases, networks and interfaces can require a tremendous number of man hours to test, which in turn takes those man hours away from other pressing projects―inefficient and wasteful. And often there was no single person or entity responsible for overseeing the entire project, and each department may have had its own success metrics. Going back to the relationship analogy, poor communication and collaboration mean frustration and dissatisfaction for all parties involved. And with troubled relationships comes finger-pointing.


One of the key elements of DevOps is its use of automation, which helps deliver faster, more reliable deployments. Through the automation testing tools currently available, like Selenium, Test Studio and TestNG, to name a few, test cases can be constructed, then run while the application is being built. This reduces testing time dramatically and helps ensure each process and feature has been developed error-free.
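As a minimal illustration of the test-while-you-build idea (a stand-in for the tools named above; the function under test and its rules are hypothetical):

```python
def apply_discount(price, percent):
    """Hypothetical business logic under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    assert apply_discount(200.0, 25) == 150.0

def test_rejects_bad_percent():
    try:
        apply_discount(100.0, 150)
        assert False, "expected a ValueError"
    except ValueError:
        pass

# In a DevOps pipeline these checks run automatically on every change,
# so errors surface while the application is being built, not after
# it lands in operations' lap.
test_typical_discount()
test_rejects_bad_percent()
print("all checks passed")
```

The point isn't the toy logic; it's that the tests travel with the code and execute on every commit, which is what shrinks the testing burden the previous paragraph describes.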

Automation is utilized for more than just testing, however. Workflows in development and deployment can be automated, enhancing collaboration and communication and, of course, shortening the delivery process. Production-ready environments that have already been tested can be continuously delivered. Real-time reporting can provide a window into any changes, or defects, that have taken place. And automated processes mean fewer mistakes due to human error.

Questions about what DevOps can deliver to your organization?

While DevOps isn’t a product, it’s certainly an integral component to consider when evaluating a Managed Services Provider (MSP). GDT’s DevOps professionals have time and again designed and deployed customer solutions that shorten time to market and deliver positive business outcomes. For more information about DevOps and the many benefits it can provide to organizations of all sizes, contact GDT’s talented, tenured solutions architects. They’d love to hear from you.

How do you secure a Cloud?

Every organization has moved, has plans to move, or wants to move to The Cloud. And by 2020, most will be there. According to a recent survey, within two (2) years 83% of enterprise workloads will be in The Cloud―41% on public Clouds, like AWS and Microsoft Azure, 20% private-Cloud based, and 22% as part of a hybrid architecture. With the amount of traffic currently accessing The Cloud, and considering the aforementioned survey figures, security will continue to be at the forefront of IT departments’ collective minds―as well it should.

With organizations selectively determining what will run in The Cloud, security can prove challenging. Now throw in DevOps’ ability to build and test Cloud apps easier and faster, and you’ve amped those Cloud security concerns significantly.

Security Solutions geared for The Cloud

To address the spate of Cloud-related security concerns, Cisco built an extensive portfolio of solutions, listed below, to secure customers’ Cloud environments, whether public, private, or a combination of both (hybrid).

Cisco Cloudlock

The Cloudlock DLP (Data Loss Prevention) technology doesn’t rest; it continuously monitors Cloud environments to detect sensitive information, then protect it. Cloudlock controls Cloud apps that connect to customers’ networks, enforces data security, provides risk profiles and enforces security policies.

Cisco Email Security

Cisco Email Security protects Cloud-hosted email, defending organizations from threats and phishing attacks in G Suite and Office 365.

Cisco Stealthwatch Cloud

Stealthwatch Cloud detects abnormal behavior and threats, then quickly quells them before they evolve into a disastrous breach.

Cisco Umbrella

Cisco Umbrella provides user protection regardless of the type, or location, of Internet access. It utilizes deep threat intelligence to provide a safety net—OK, an umbrella—for users by preventing them access to malicious, online destinations, and thwarts any suspect callback activities.

Cisco SaaS Cloud Security

If users are off-network, anti-virus software is often the only protection available. Cisco’s AMP (Advanced Malware Protection) for Endpoints prevents threats at their point of entry, and continuously tracks each and every file that accesses those endpoints. AMP can uncover the most advanced of threats, including ransomware and file-less malware.

Cisco Hybrid Cloud Workload Protection

Cisco Tetration, Cisco’s proprietary analytics system, provides workload protection for multi-cloud environments and data centers. It uses zero-trust segmentation, which enables users to quickly identify security threats and reduce their attack surface (all endpoints where threats can gain entry). It supports on-prem and public Cloud workloads, and is infrastructure-agnostic.

Cisco’s Next-Gen Cloud Firewalls

Cisco’s VPN capabilities and virtual Next-Gen Firewalls provide flexible deployment options, so protection can be administered exactly where and when it’s needed, whether on-prem or in the Cloud.

For more information…

With the help of its state-of-the-art Security Operations Center (SOC), GDT’s team of security professionals and analysts have been securing the networks of some of the most noteworthy enterprises and service providers in the world. They’re highly experienced at implementing, managing and monitoring Cisco security solutions. They’d love to hear from you.


Flash, yes, but is it storage or memory?

We’ve all been pretty well trained to believe that, at least in the IT industry, anything defined or labeled as “flash” is a good thing. It conjures up thoughts of speed (“in a flash”), which is certainly one of the most operative words in the industry―everybody wants “it” done faster. But the difference between flash memory and flash storage is often confused, as both store information and both are referred to as solid state storage. For instance, a thumb drive utilizes flash memory, but is considered a storage device, right? Solid state means neither is mechanical, but electronic. Mechanical means moving parts, and moving parts means prone to failure from drops, bumps, shakes or rattles.

Flash Memory―short-term storage

Before getting into flash memory, just a quick refresher on what memory accomplishes. Memory can be viewed as short-term data storage, maintaining information that a piece of hardware is actively using. The more applications you’re running, the more memory is needed. It’s like a workbench, of sorts, and the larger its surface area, the more projects you can be working on at one time. When you’re done with a project, you can store it long-term (data storage), where it’s easily retrieved when needed.

Flash memory is non-volatile, meaning it retains data even when the power is off. It’s quickly accessible, small in size, and more durable than volatile memory, such as RAM (Random Access Memory), which can only hold data while the device is powered on. Once the device is turned off, anything in RAM is gone.

Flash Storage―storage for the long term

Much like a combustion engine needs fuel, flash storage (the engine) needs flash memory (the fuel) to run. It’s non-volatile (data is retained without power), and utilizes one of two (2) types of flash memory―NAND or NOR.

NAND flash memory writes and reads data in blocks, while NOR reads and writes independent bytes. NOR flash is faster for random access and more expensive, and better suited to executing small amounts of code―it’s often used in mobile phones. NAND flash is generally used in devices that need to store and replace large files, such as photos, music or videos.
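The block-versus-byte distinction can be sketched in a few lines of Python (a toy model only―the block size and function names are invented for illustration, and real flash controllers add complications like erase-before-write and wear leveling):

```python
# Toy model of NAND (block-granular) vs NOR (byte-granular) access.
# Sizes and APIs here are hypothetical; real NAND blocks are KB to MB.

BLOCK_SIZE = 4  # bytes per "block" in this toy model

nand = bytearray(16)  # NAND: read/write whole blocks at a time
nor = bytearray(16)   # NOR: read/write individual bytes

def nand_write(block_index: int, data: bytes) -> None:
    """NAND-style write: an entire block in one operation."""
    assert len(data) == BLOCK_SIZE
    start = block_index * BLOCK_SIZE
    nand[start:start + BLOCK_SIZE] = data

def nor_read(address: int) -> int:
    """NOR-style read: a single byte at any address (fast random access)."""
    return nor[address]

nand_write(1, b"ABCD")  # writes bytes 4..7 in one block operation
nor[2] = 0x42           # NOR permits direct byte-level access
print(bytes(nand[4:8]), nor_read(2))  # b'ABCD' 66
```

Block-granular access favors streaming large files (NAND’s sweet spot), while byte-granular random access favors executing code in place (NOR’s).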

Confusion between flash storage and flash memory might be non-existent for some, maybe even most, but it’s astounding how much published information either confuses the two (2) or does a poor job of differentiating them.

Contact the Flash experts

For more information about flash storage, including all-flash arrays, which contain many flash memory drives and are ideal for large enterprise and data center solutions, contact the talented, tenured solutions architects and engineers at GDT. They’re experienced at designing and implementing storage solutions, whether on-prem or in the cloud, for enterprises of all sizes. You can reach them at

When considering an MSP, don’t forget these letters: ITSM and ITIL

It’s not hard to find a Managed Services Provider (MSP); the hard part is finding the right one. Of course, there are many things to consider when evaluating MSPs, including the quality of their NOC and SOC (don’t forget the all-important SOC), the experience of the professionals who manage and maintain them on a second-by-second basis, the length of time the provider has been delivering managed services, the breadth and depth of its knowledge, and the range of customer sizes and industries it serves. But there’s something else that should be considered, and asked about, when you’re evaluating MSPs―whether they utilize ITSM and ITIL methodologies.

ITSM (Information Technology Service Management)

ITSM is an approach for the design, delivery, management and overall improvement of an organization’s IT services. Quality ITSM delivers the right people, technology, processes and toolsets to address business objectives. If you currently manage IT services for your organization, you have, whether you know it or not, an ITSM strategy. Chances are that if you don’t know you have one, it might not be very effective, which could be one (1) of the reasons you’re evaluating MSPs.

Ensure the MSPs you’re evaluating staff their NOC and SOC with professionals who adhere to ITSM methodologies. If an ITSM strategy is poorly constructed and doesn’t align with your company’s goals, it will undermine whether ITIL best practices can be achieved.

ITIL (Information Technology Infrastructure Library)

ITIL is a best practices framework that helps align IT with business needs. It outlines complete guidelines for five (5) key IT lifecycle service areas: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. ITIL’s current version is 3 (V.3), so it’s important not only to ensure an MSP follows ITIL methodologies, but also that they’re well-versed in ITIL V.3, which addresses twenty-eight (28) different business processes that affect a company’s ITSM.

Here’s the difference between ITSM and ITIL that you need to remember

ITSM is how IT services are managed; ITIL is a best practices framework for ITSM. Put simply, ITSM is what you do, and ITIL is how to do it. ITIL helps make sense of ITSM processes. ITIL isn’t the only framework of its type in the IT industry, but it’s undoubtedly the most widely used.

Without understanding the relationship between ITSM and ITIL, companies are unlikely to gain business agility, operational transparency, and reductions in downtime and costs. And if your MSP doesn’t understand that relationship, it’s far less likely to deliver those benefits.

For more info, turn to Managed Services Experts

Selecting an MSP is a big decision―turning over the management of your network and security can make or break your business. Ensuring that the provider closely follows ITSM and ITIL methodologies is critically important.

For more information about ITSM and ITIL, contact the Managed Services professionals at GDT. They manage networks and security for some of the largest companies and service providers in the world from their state-of-the-art, 24x7x365 NOC and SOC. You can reach them at