Solutions Blog

Enough with the Aggie jokes—Texas A&M’s new initiative to combat cyber threats is nothing to laugh about

By Richard Arneson

Some things just don’t make sense, like why a baseball that hits the foul pole is a fair ball. Shouldn’t it be called the fair pole? Or why hot dogs come in packs of ten (10) but the buns in quantities of eight (8). Oh, and how about this one—it’s estimated that within the next three (3) years almost 4 million (4,000,000) cybersecurity jobs will go unfilled due to a lack of both interest and adequate training. It doesn’t seem possible given the number of cybersecurity events we hear about every week, what with the ransomware, the Trojans, the viruses, the malware, etc. You’d think cybersecurity would be attracting professionals in droves, but it isn’t. Texas A&M University is doing something about it, though.

While many of the larger corporations have enacted specialized apprenticeship programs in cybersecurity, including mobile training trucks for personnel, the Fightin’ Texas Aggies have taken a far more proactive approach to this issue, and it’s one from which they’re immediately benefiting. To address their cybersecurity labor shortage, they’re pairing students with AI software to protect the school’s systems from cyber-attacks. In turn, the students get security training and a great, hands-on addition to their resumes.

Each month, the Texas A&M University System, which includes eleven (11) universities and seven (7) state agencies, estimates that there are approximately a million attempts to hack into their systems. Prior to implementing this program, IT security was handled by a lean staff that included few full-time employees. Now ten (10) students comprise the majority of their IT security team, and they’re utilizing AI software to detect, monitor and remedy threats. And they’re having no trouble filling these positions. Word has spread throughout campus that this high-visibility program provides insightful skill sets and extremely marketable training.

Nothing beats on-the-job experience

The students’ first order of business each day is to study a whiteboard that outlines areas within the university system that have faced, or are currently facing, a threat. The threats are compiled through AI, which also prioritizes each one. Then it’s up to the students to analyze any abnormalities and determine whether they appear suspicious by comparing them to prior attacks.

AI software is key to this initiative, serving as a great springboard for inexperienced cybersecurity students by allowing them to evaluate threats immediately. The AI doesn’t act on the threats itself—which some consider a risky proposition in the first place—remediating the issues is left up to the students.

So why the lack of professionals in cybersecurity?

Almost fifty percent (50%) of security executives recently surveyed by ISSA (Information Systems Security Association) said that this glaring lack of security professionals is due to two (2) things—high turnover and a high rate of job burnout. And while Texas A&M’s SOC (Security Operations Center) isn’t immune to either, they’re attempting to address both with sheer numbers—a deep pool of students looking for an opportunity to work there. Those numbers mean students are able to spend time training or working on side projects that can be great additions to their resumes. Gig ’em.

Got questions? Call on the Security experts

To find out more about cybersecurity and the threats that may soon target your organization, contact GDT’s tenured and talented security analysts. From their Security- and Network Operations Centers, they manage, monitor and protect the networks of some of the most notable enterprises, service providers, healthcare organizations and government agencies in the world. They’d love to hear from you.


Read more about network security here:

Gen V

Sexy, yes, but potentially dangerous

Tetration—do you know its meaning?

It’s in their DNA

Rx for IT departments—a security check-up

When SOC plays second fiddle to NOC, you could be in for an expensive tune

How to protect against Ransomware

IT Staff Augmentation—it’s about more than just the resume

By Richard Arneson

Whether you call it IT staff augmentation or IT project outsourcing (they’re actually different, in the event you’d like to read about the distinction), there’s no question that both have been adopted in force. In the IT industry alone, the number of contracted, external technical professionals has grown by approximately seventy-five percent (75%) in the past ten (10) years. And that percentage is only climbing.

In some professions (sales being the first that comes to mind), the soft skills listed below are usually given more consideration by staffing solutions firms. As it relates to IT, however, companies looking to augment their organizations are hyper-focused―as often are the staffing firms―on particular technical skill sets, specific certifications, and experience working on initiatives that perfectly align with theirs.

Most staffing firms are eager to tell their clients about the candidate they’ve found whose resume and LinkedIn profile perfectly aligns with their needs. But being focused on technical skills, certifications and experience often means they’ve ignored the soft skills, which can easily make or break a project.

The soft skills an IT staffing firm needs to consider

Before you select a staffing solutions firm, ask them exactly what, beyond the stated job requirements and experience, they consider prior to recommending a candidate for an interview. If they don’t mention that they screen for most of the following soft skills, you might end up bringing somebody into your organization who, despite having the right resume, turns out to be a disastrous hiring decision. And for each of the soft skills they mention, have them provide details about how they gauge candidates on it.

Work Ethic

Working and collaborating with a fellow employee who possesses a poor work ethic will drain the lifeforce out of any organization—and that’s if they’ve been there a while. Now bring in somebody from the outside with the same issue, and you’ll see and feel your staff’s animosity spread like a drought-fueled wildfire.

Problem Solving

Everybody needs some onboarding time, but if that honeymoon period lasts too long, feelings of not getting your money’s worth will soon follow. If you’re augmenting your staff for six (6) months or less, you may end up paying for several weeks that are not only unproductive, but that also pull time away from key projects or initiatives to train the contracted employee.

Time Management and Project Management

The best of intentions can be quickly and easily derailed if a candidate doesn’t effectively manage their day and schedule. Consider these stats: Ten (10) minutes spent planning your day can save at least two (2) hours of wasted time each DAY (not week or month…each day). And only twenty percent (20%) of each day is spent on important or crucial tasks. The other eighty percent (80%) is spent toiling on issues that have little or no value. A candidate’s poor time management skills could mean you get about twenty cents (20¢) worth of work for each dollar spent.


Communication Skills

People in the IT industry like to crack wise about engineers’ and programmers’ conspicuous lack of personality. Remember, though, that contractors are being inserted into an existing organization that has its own personality (every department and company has one). And while the candidate needs to mesh with the personality of the IT department, their communication skills, both verbal and written, can’t be ignored. Without them, meshing with the department in which they’ll be working will come much more slowly, if at all. And if they’ll be interacting with your customers, you may be sorry the staffing firm didn’t make this soft skill a priority.

Listening Skills

Regardless of the industry or profession, poor listening skills can sour a relationship faster than anything—anything. Now put a new employee with poor listening skills into an established team or organization, and the results won’t be good. They’re often seen as know-it-alls or as too opinionated—reputations from which they may never recover.


Attitude

You don’t have to be a licensed psychologist to quickly sense the type of attitude a candidate possesses, be it positive, negative or apathetic. How they talk about past experiences and employment will give you a pretty good indication. Attitudes can be infectious; you don’t want a negative one infiltrating your organization.

Roll with the Punches

A candidate’s ability to adapt and be flexible is critical. What a candidate is brought in to address and accomplish is quite possibly going to change, at least to some extent. While this also incorporates the need for a positive attitude, you don’t want a new employee complaining that what they’re doing isn’t what they were brought in to do.

Examples, Examples, Examples

While a staffing solutions firm should be asking candidates for examples of how they’ve demonstrated these soft skills, you need to ask the same of the firms you’re evaluating. After they’ve listed the soft skills they look for in a candidate, ask them to provide examples of how they determine whether a candidate possesses them.

Call on the Experts

If you have questions about what to look for in an IT staffing solutions firm, contact the staffing professionals at GDT. Some of the largest, most notable companies in the world have turned to GDT so key initiatives can be matched with the right IT professionals to drive projects to completion. GDT maintains a vast database of IT professionals who hold the highest levels of certifications and accreditations in the industry. And they understand the importance of finding professionals with the right soft skills. In addition, the IT professionals they place have access to the talented, tenured solutions architects, engineers and professionals at GDT.

Can’t wait for 5G? The FCC has done something to shorten your wait

By Richard Arneson

Whether you’re a dyed-in-the-wool technophile or just one of those people who has to be the first to own the latest gizmo or gadget, you’re probably eagerly anticipating 5G, which will provide consumers a host of benefits, including faster speeds, lower latency and a more IoT-friendly wireless infrastructure. But when you hear that 5G won’t be fully deployed for another four (4) years, it kinda ruins the mood. Unfortunately, service providers can’t roll out 5G—or any G, for that matter—all at once. Think of the cell towers that need to be upgraded from coast to coast—it’d take almost half a million technicians working simultaneously to accomplish that feat in one fell swoop. Yes, the rollout will begin within the next couple of months, but if you’re not in one (1) of the lucky roll-out areas, you’ll have to wait…and wait…and potentially wait another four (4) years.

…to the rescue

The Federal Communications Commission (FCC) wants to do something about that waiting. And they have. On August 2nd, they voted on rules to speed up rollouts of not just 5G, but new broadband networks as well. The rules are known as One Touch Make Ready (OTMR), a non-descriptive name for rules that address the strict, cumbersome laws specifying the required distance that must separate network elements attached to a pole—usually a telephone pole.

When a new service provider enters a market, or an existing one (1) wants to address poor connectivity in an area by adding a site, any equipment or wires already attached to the pole must be reconfigured to ensure the required separation is maintained. It’s so painful that many speculate it’s the very reason Google Fiber had to greatly throttle back its once-aggressive deployment schedule.

Currently, laws related to cell towers are handled by the jurisdiction in which the towers reside. Resultant installations are a headache at best, a nightmare at worst, and pole access for new competitors is relegated to “least important” status. Because accommodating new competitors relies on incumbent carriers reconfiguring their own equipment and wiring, the process is, as you probably imagined, not one of their higher priorities.

According to FCC Chairman Ajit Pai: “For a competitive entrant, especially a small company, breaking into the market can be hard, if not impossible, if your business plan relies on other entities to make room for you on those poles. Today, a broadband provider that wants to attach fiber or other equipment to a pole first must wait for, and pay for, each existing attacher [installer] to sequentially move existing equipment and wires. This can take months. And the bill for multiple truck rolls adds up. For companies of any size, pole attachment problems represent one of the biggest barriers to broadband deployment.”

In addition to accelerating 5G, the FCC believes the new rule will mean 8.3 million additional premises passed with fiber, representing in excess of $12.6 billion spent on those projects. And beyond faster installations of cell sites, the new rules will greatly increase the fiber density available for wireless backhaul.

Mobility Experts with answers

If you have questions about your organization’s current mobility strategy (or the one you’d like to implement) and how 5G will affect it, contact GDT’s Mobility Solutions experts. The team is composed of experienced solutions architects and engineers who have implemented mobility solutions for some of the largest organizations in the world. They’d love to hear from you.

Usually just a minor annoyance, the Flash Player update can now result in a major ordeal

By Richard Arneson

It’s one (1) of the most common speed bumps on the Internet highway—the Adobe Flash Player update message. It’s unexpected and never welcome—a little like a tornado, but not quite that bad. It may not trump some of the other digital speed bumps, like the Windows update you have to sit through after you’ve hit “Shut Down” on your computer (you know, the one that usually occurs at 5:30 on Friday afternoon), but it still serves as one (1) of computing’s many figurative mosquitoes. But while the Flash update has only proven to be a minor annoyance, you can now place it in another category―crippling.

Palo Alto Networks, the Santa Clara, CA-based cybersecurity firm, discovered earlier this month that a fake Flash updater has been loading malware on networks since early August. Here’s the interesting part—it actually installs a legitimate Flash update. But before you think cyber attackers have gone soft, know that they’re downloading Flash for distraction purposes only. While the update is taking place, another installation is occurring—a bot named XMRig, which mines a cryptocurrency named Monero. Once the install(s) are complete, the infected machine, unbeknownst to its user, begins mining Monero. And there you have it—cryptojacking.

Cryptojacking with XMRig

Once the phony Flash update is launched, the user is directed to a fake URL that, of course, isn’t connected to an Adobe server. After the Flash update is installed, XMRig accesses a Monero mining pool—and the fun begins. XMRig begins mining Monero from infected, networked computers as unknowing users merrily work along, completing their day-to-day tasks. Keep in mind that Monero is a legitimate form of cryptocurrency. Like Bitcoin for ransomware, Monero is the cryptocurrency of choice for cryptojacking. Monero’s website claims it is “the leading cryptocurrency with a focus on private and censorship-resistant transactions.” (Unlike Bitcoin, Monero doesn’t require the recipient to disclose their wallet address to receive payment(s)).

Let’s back up a bit—here’s how crypto mining works

It can be argued that cryptojacking has replaced ransomware as cyberattackers’ malevolent deed of choice. It’s important to remember, though, that cryptocurrency mining itself is legal—it’s how cryptocurrency works. Mining is the process of finding transactions, then adding them to the currency’s public ledger. Transactions are grouped into blocks, and each new block links to the chain of blocks before it—hence the name blockchain.

A blockchain’s ledger isn’t housed in one (1) centralized location. Instead, it’s simultaneously maintained in duplicate across a network of computers—millions of them. Encryption controls and protects the creation of new coins and the transfer of funds without disclosing ownership. Transactions enter circulation through mining, which basically turns computing resources into coins. Anybody can mine cryptocurrency by downloading open-source mining software, which allows their computer to mine, or account for, the currency. Mining solves a mathematical problem associated with each block of transactions, which verifies that senders’ accounts can cover their payments, determines to which wallets the payments should be made, and updates the all-important ledger. The first miner to solve the problem gets paid a commission in the particular currency being mined.
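That “mathematical problem” is, at its core, a brute-force hash search. Here’s a toy proof-of-work sketch in Python, purely for illustration—real mining (and especially Monero’s RandomX algorithm) is vastly more elaborate, but the idea of guessing nonces until a hash meets a difficulty target is the same:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash of (block_data + nonce)
    starts with `difficulty` hexadecimal zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # the "winning" guess that earns the commission
        nonce += 1

# A pretend transaction; difficulty 4 takes tens of thousands of guesses.
winning_nonce = mine("alice->bob:5", 4)
print(winning_nonce)
```

Raising the difficulty by one hex digit multiplies the expected number of guesses by sixteen, which is why profitable mining now demands purpose-built hardware—and why attackers would rather steal your CPU cycles than buy their own.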

In cryptocurrency’s nascency, the computing power needed was minimal. Basically, anybody could do it. Now the computing power needed to mine cryptocurrency is considerable, and miners require expensive, purpose-built, super-powerful computers to turn a profit. Building enough computing resources to profitably mine cryptocurrency today is expensive, often cost prohibitive. In cryptojacking, however, the cyber attackers network together infected computers and utilize their computing power without spending a dime. In turn, each victim’s infected computer is busy surreptitiously mining cryptocurrency—and slowing to a crawl. The bad guys enjoy pure net revenue.

Got questions? Call on the Security experts

To find out more about cryptojacking, ransomware, malware, Trojans, and the host of security-related issues your organization needs to consider and fend off, contact GDT’s tenured and talented security analysts. From their Security- and Network Operations Centers, they manage, monitor and protect the networks of some of the most notable enterprises, service providers, healthcare organizations and government agencies in the world. They’d love to hear from you.



Hybrid Cloud Conundrums? Consider HPE GreenLake Flex Cap

By Richard Arneson

If you need to purchase a container to hold what you estimate is between 48 and 60 ounces of liquid, are you going to buy the 50- or the 70-ounce container? Yes, you’ll play it safe and get the bigger one, but you’ll spend more money and it will take up more space on the shelf. And it won’t be very satisfying, especially if you miscalculated and only had thirty-six (36) ounces to begin with. In short, you didn’t do a very good job of right-sizing your container solution. And that’s exactly what IT administrators have struggled with for years, whether it’s bandwidth, equipment or any other type of technology solution. Unfortunately, right-sizing an IT solution usually requires a dash of hope.

Pay-as-you-go trumps the guesswork of right-sizing

HPE GreenLake Flex Capacity is a hybrid cloud solution that gives customers a public cloud experience, but with the peace of mind that comes with on-premises deployments. It’s a pay-as-you-go solution, so right-sizing can go the way of the dinosaur. HPE GreenLake Flex Cap provides capacity on-demand and scales quickly to meet growth needs, but without the wait times—often long ones—that come with circuit provisioning.

And it gets better―management is greatly simplified; customers can manage all their cloud resources, and in the environment of their choosing. HPE GreenLake customers enjoy:

  • Limited risk by maintaining certain workloads on-prem
  • Better and more accurate alignment of cash flows, no upfront costs and a pay-as-you-go model
  • Savings by no longer wasting dollars on circuit overprovisioning
  • Immediate scalability to address the needs of your network
  • Real-time failure alerts with remediation recommendations
  • The ability to perfectly size capacity
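The economics of the container example above translate directly into capacity pricing. Here’s a minimal sketch of overprovisioned fixed capacity versus metered pay-as-you-go—the rates and the per-terabyte billing model are hypothetical, for illustration only, and are not HPE’s actual pricing:

```python
# Hypothetical rates, for illustration only (not HPE's actual pricing).
FIXED_TB = 70             # capacity bought up front "to be safe"
FIXED_RATE_PER_TB = 100   # monthly cost per provisioned TB
METERED_RATE_PER_TB = 120 # pay-as-you-go rate per TB actually used

def monthly_cost_fixed(used_tb: float) -> float:
    # You pay for all 70 TB whether you use them or not.
    return FIXED_TB * FIXED_RATE_PER_TB

def monthly_cost_metered(used_tb: float) -> float:
    # You pay only for what you consume.
    return used_tb * METERED_RATE_PER_TB

for used in (36, 48, 60):
    print(f"{used} TB used: fixed ${monthly_cost_fixed(used):,.0f}"
          f" vs. metered ${monthly_cost_metered(used):,.0f}")
```

Even at a higher unit rate, metering wins whenever actual usage falls well below the capacity you would otherwise have had to buy up front—which is precisely the miscalculation the 36-ounce scenario describes.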

And with these integrated, turnkey packages, your organization can enjoy HPE GreenLake Flex Cap even faster:


GreenLake for Microsoft Azure or Amazon Web Services (AWS)

Whether you’re utilizing Microsoft Azure or Amazon Web Services (AWS) for your cloud environment, GreenLake Flex Cap can provide turnkey controls for performance, compliance and costs.

GreenLake for SAP HANA

SAP HANA customers can enjoy a fully managed, on-prem appliance with right-sized, SAP®-certified hardware and services to satisfy workload performance and availability requirements. From HPE, the leading supplier of SAP infrastructure, GreenLake for SAP HANA delivers the performance, control and security needed for the most demanding of mission-critical applications.

GreenLake for Big Data

GreenLake for Big Data accelerates time-to-value with asymmetric or symmetric configurations, and because datasets never have to be shipped to third-party data centers, there are none of the security issues or risks associated with data repatriation.

GreenLake for EDB Postgres

Reduce TCO and simplify operations with this Oracle-compatible, open-source database platform. Your teams will be able to better focus on the applications and insights that drive business outcomes.

GreenLake for Backup

Pay for exactly what you back up. Yes, it’s that simple. GreenLake for backup includes Commvault software that’s pre-integrated on your choice of HPE StoreOnce or HPE 3PAR Storage.

Now combine GreenLake with HPE Pointnext

HPE Pointnext not only monitors and manages the entire solution, but also provides customers with a portal that delivers key analytics and detailed consumption metrics.

Questions? Call on the experts

If you have additional questions or need more information about HPE GreenLake Flex Capacity and the many benefits it can provide your IT organization, contact one of the talented and tenured solutions architects or engineers at GDT. They’d love to hear from you.

Answer: You get a solution better than your current one

By Richard Arneson

Question: What happens when you combine AI (artificial intelligence) and Wi-Fi? Apologies to Alex Trebek and Jeopardy!, but this particular solution is so cool, exciting and effective that I couldn’t bury the lede—I had to skip straight to the answer.

Wi-Fi has been part of our lexicon and lifestyle since 2003 and, no question, it was revolutionary. Connecting your computer to the network without wires…could it get any better than that? The technology remained fairly stagnant and unchanged for several years, however. Any claim that Wi-Fi was stuck in the Dark Ages would have been a gross exaggeration, but it was beginning to feel a bit stale. And with that came dissatisfaction, user-(un)friendly experiences and, ultimately, the worst adjective consumers can attach to a technology—frustrating.

It all changed in 2007, though. The launch of the iPhone, including its phenomenally successful marketing campaign, resulted in consumers snapping them up like snow cones on a hot summer day. Hello, smart device. Then came other smart devices—tablets, watches, doorbells, thermostats, et al.—which generate thirteen times (13x) more traffic than non-smart devices. And then came Mist.

Mist Systems

Based in Cupertino, CA, four-year-old Mist Systems was funded by several top investors, most notably Cisco Investments. The folks at Mist wondered why 12.6 billion smart devices worldwide were relying on a technology that wasn’t terribly, well, smart. They set out to develop a learning wireless LAN solution that would, among other things, replace time-consuming, often frustrating manual tasks with proactive automation.

Mist began with three (3) end goals in mind: improve network reliability, transform IT services and enhance the user experience.

Mist set out to fix the ills of Radio Resource Management (RRM), which manages several characteristics inherent in wireless communications, such as co-channel interference and signal quality. The problem with RRM is that it has always been hamstrung by a lack of user insight due to poor data collection. Not so with Mist, which utilizes AI to create a Wi-Fi solution that heals itself.

Mist constantly collects, per user, RF (radio frequency) information regarding coverage, throughput, capacity and performance. The collected data is then analyzed through AI to proactively make changes that enhance the user experience.

Service Level Expectations (SLEs)

Mist offers the marketplace’s only Wi-Fi solution that allows for SLEs clients can customize based on their needs. In addition to traditional metrics, such as coverage, throughput, uptime and latency, Mist customers can set, monitor and enforce their defined SLEs, which allows them to better understand just how issues such as jitter, packet loss and latency are adversely affecting end users.
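Conceptually, enforcing an SLE comes down to comparing measured metrics against customer-defined thresholds. The sketch below illustrates that idea only—the names, metrics and thresholds are hypothetical and have nothing to do with Mist’s actual API:

```python
from dataclasses import dataclass

@dataclass
class SLE:
    """A customer-defined Service Level Expectation (hypothetical fields)."""
    max_latency_ms: float
    max_jitter_ms: float
    max_packet_loss_pct: float

def violations(sle: SLE, sample: dict) -> list:
    """Return the names of the metrics a measurement sample breaches."""
    breached = []
    if sample["latency_ms"] > sle.max_latency_ms:
        breached.append("latency")
    if sample["jitter_ms"] > sle.max_jitter_ms:
        breached.append("jitter")
    if sample["packet_loss_pct"] > sle.max_packet_loss_pct:
        breached.append("packet_loss")
    return breached

office_sle = SLE(max_latency_ms=50, max_jitter_ms=10, max_packet_loss_pct=1.0)
sample = {"latency_ms": 72, "jitter_ms": 4, "packet_loss_pct": 0.2}
print(violations(office_sle, sample))
```

In this toy example, only latency breaches the threshold; a production system would collect samples per user, trend them over time, and feed the results back into automated remediation.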

Here’s why Mist is truly refreshing

Mist offers the only enterprise-class wireless solution that is powered by a microservices cloud architecture and doesn’t require a WLAN Controller. As a result, customers enjoy enhanced agility and scalability from an AI engine that gathers data and insight, and utilizes automation to deliver a self-healing Wi-Fi solution.

Mist introduces customers to Marvis, their virtual network assistant built on AI, deep learning and machine learning. By using Natural Language Processing (NLP), Marvis provides IT administrators with immediate answers, so time once wasted digging for them in Command Line Interfaces (CLIs) or dashboards can be better spent on other tasks or projects.

Mist can lay claim to another first―they offer the only enterprise Bluetooth Low Energy (BLE) solution that doesn’t require manual calibration. And additional beacons aren’t required; Mist developed proprietary virtual BLE beacons, which can be moved around as needed with a simple mouse click or API call.

Mist’s solution provides what Wi-Fi has always aspired to be, and then some―a predictable, reliable and self-healing Wi-Fi solution based on extensive data collection, AI and machine learning.

There are no dumb smart questions

If you have questions about smart devices, IoT or Wi-Fi solutions―including Mist Systems’―contact the talented, tenured solutions architects and engineers at GDT’s IoT and Mobility Solutions practice. They’d love to hear from you.

For more about Mobility Solutions and IoT…

Click here to get more information about mobility solutions, and here to watch a video about how GDT delivered a secure mobility solution to a large retailer.

The 6 (correctly spelled) R’s of a Cloud Migration

By Richard Arneson

It’s always confounded me that two (2) of the three (3) R’s of education―reading, writing and arithmetic―were spelled wrong. Whoever coined the phrase was obviously trying to set students up to fail at spelling. Thankfully, we work in an industry that understands the proper spelling of R words; in this case, I’m referring to the six (6) R’s of a cloud migration. That’s not to say you have to pick just one (1), though. It’s not an either/or scenario. If you want to fully enjoy the cloud and all it has to offer, your organization might require several of the following types of cloud migrations. That’s where experience and expertise come in.

Re-host (aka Lift and Shift)

Re-hosting applications to the cloud is common, especially if a company wants to ramp up its cloud migration as quickly as possible. For instance, there might be a certain business case that demands a fast deployment. In re-hosting, applications are moved to the cloud as-is, even if cloud optimizations haven’t taken place. As a result, companies can enjoy quick savings, though not everything they might want, due to the abbreviated timeline.

Once workloads and applications have been re-hosted, it’s easier to optimize and re-architect them in the future. Amazon Web Services (AWS) has a solution for this called Snowball, which securely transfers data at petabyte scale into and out of its cloud. Also, its VM Import/Export tool allows you to leverage existing VM investments by easily importing them into the AWS Cloud.

Re-platform (aka Lift, Shift and Tweak)

Re-platforming takes the re-hosting approach but also addresses a common issue―not all applications can be migrated to the cloud as-is. When an application can’t run natively on a cloud platform, an emulator can be used, which runs in the cloud of the provider you choose (AWS, Microsoft Azure, Google Cloud). The applications will appear no different to end users―same front end, interfaces, look and feel. If rebuilding a current system is cost prohibitive, you can still enjoy cloud technologies on a legacy infrastructure through re-platforming.

Re-architect (aka Re-write)

Re-architecting is like purchasing a Mercedes with all the options and features attached. Yes, it’ll cost you, but if you’re looking for a superior level of performance, business continuity, flexibility and scalability, this will be your best option. It’s a good bet that companies touting and enjoying tremendous cloud benefits have utilized this migration strategy.

And if you initially choose to re-host an application, that doesn’t mean you can’t re-architect it in the future. If you’d like, re-host now, re-architect later. Doing so can reduce the project’s complexity by separating application re-design from the cloud migration.

Re-purchase (aka Drop and Shop)

Think Salesforce. Think SaaS. Re-purchasing is simply a matter of changing the licensing. In the case of Salesforce, you’re going from a legacy CRM to a cloud option. You’ll save both hard and soft costs, such as the time it takes an IT staffer to manage, maintain and monitor the application.

Retire (aka Curbside pickup)

One of the key elements of creating a cloud migration strategy is to first conduct a thorough assessment of your existing environment, applications, workloads, etc. If done properly and comprehensively, the assessment will be able to determine which IT elements can be hauled out to the trash. And with retirement comes cost savings.

Retain (aka You can stay…for a while)

If you’re not ready to move a particular application to the cloud for whatever reason (depreciation, performance concerns, gut feeling…), you may want to keep the status quo for a while. That’s not to say you’ll want to retain it forever. The more comfortable you become with the cloud and the migration, the sooner you’ll probably begin moving retained applications onto the Retire list.

It all starts with Expertise―then an Assessment

Moving to the cloud is a big move; it might be the biggest move of your IT career. If you don’t have the right cloud skill sets, expertise and experience on staff, you may soon be wondering if the cloud is all it’s cracked up to be.

That’s why turning to experienced cloud experts like those at GDT can help make your cloud dreams a reality. They hold the highest cloud certifications in the industry and are experienced in delivering solutions from GDT’s key cloud partners―AWS, Microsoft Azure and Google Cloud. They’d love to hear from you.


If you’d like to learn more about the cloud, migrating to it, considerations prior to a migration, or a host of other cloud-related topics, you can find them here:

Are you Cloud Ready?

Calculating the costs–soft and hard–of a cloud migration

Migrating to the Cloud? Consider the following

And learn how GDT’s Cloud Team helped a utility company achieve what they’d wanted for a long, long time:

A utility company reaps the benefits of the cloud…finally

Do you need staff aug or outsourcing? Or both?

By Richard Arneson

Yes, staff augmentation and outsourcing are different, but if you need either, you’ll soon be enjoying a cost-effective solution that will―if the right professionals are secured, of course―help you meet performance and project goals. Choosing either really comes down to one (1) thing: what exactly are you trying to accomplish by bringing additional personnel into your organization, if only for a set amount of time?

It can be argued that the IT industry is the perfect one (1) for staff aug and outsourcing. Finding the precise talent with the right skill sets and certifications, and who understands specific technologies and architectures, can be needle in a haystack stuff. And pulling your personnel from their responsibilities to go in search of the right talent can take weeks, usually months; and they might not find that person at all. And with deadlines nearing, companies often have to settle for somebody who isn’t qualified, which leaves them praying that their new hire will fit the bill―many times they don’t.

The following will outline both staff augmentation and outsourcing, and give you a better idea of which solution best fits your particular project(s), goals and environment.

Staff Augmentation

If you’re in search of a temporary addition to your IT organization with the right skill sets to augment your staff’s capabilities on a particular project or set of them, staff aug is likely what you need. It’s like a major league team making a trade-deadline acquisition to pick up that last cog they hope will get them through the playoffs and help them walk off with the World Series trophy. Yes, they could sign that player to a multi-year contract once the season’s over, but they need that precise talent now (a left-handed pitcher with a wicked slider that produces ground balls). You might need an engineer who’s both a CCIE and ACMP with over ten (10) years of experience, has worked on projects and networks with at least a thousand network assets, and has five (5) years of DevOps experience.


Outsourcing

Take that baseball team that needs a left-handed pitcher. Let’s say they didn’t win the World Series, so the next season they’ve decided to trade their sub-.100-hitting infield for four (4) fielders who will give them more punch at the plate. And while this would be highly unusual in baseball, they’ve signed these four (4) infielders to one-year contracts. Project outsourcing in IT takes a similar approach: a team of professionals is brought in to manage and oversee a particular project from start to finish. They’re not augmenting staff to assist in day-to-day functions; outsourced personnel handle the project soup to nuts.

Best fits for both

While a single IT professional or team of them might be secured for a particular project, staff augmentation definitely provides far more flexibility. It’s not always that they’re assigned to a particular project; they may work on a variety of them that can benefit from their experience and expertise.

Outsourcing is more effective when a single project requires completion. You wouldn’t pull outsourced personnel to work on something unrelated to the project for which they’ve been hired. The specific project will be outlined in the contract. The team hired to complete it is expected to work solely on that project. Not much room for flexibility. If you’ve overestimated the personnel needed, you could be contractually obligated to pay a team of five (5) professionals when only four (4) were required.

And, of course, you may determine that both are needed. Either way, it’s critically important to secure the right personnel. Without them, neither approach will work.

Call on the Experts

If you have questions about augmenting your IT staff or require project outsourcing, talk to the best and brightest the industry has to offer: contact the GDT Staffing Services professionals. Some of the largest, most notable companies in the world have turned to GDT so key initiatives can be matched with IT professionals who can help drive those projects to completion. They possess years of IT experience and expertise, and maintain a vast network of IT professionals who hold the highest levels of certification in the industry. In addition, the IT professionals they place have access to the talented, tenured solutions architects and engineers at GDT. They’d love to hear from you.

Security Concerns about SD-WAN? Not to worry…it’s probably a step up from what you currently have

By Richard Arneson

You wouldn’t be doing your job if IT security wasn’t at the forefront of your mind, especially if you’re looking to move to a new technology like SD-WAN. Hopefully the It won’t happen to us mindset has given way to an It can happen to anybody—we’d better be fully protected way of thinking. If it hasn’t, read about a few of the more publicized 2018 malware and ransomware launches, including data breaches and computer vulnerabilities like Spectre and Meltdown. And that’s not to mention the seven million (7,000,000) data records that are stolen each and every day, which translates to roughly eighty-one (81) records every second. Yes, it’s all scary, but it’s manageable if you’re considering an SD-WAN migration.

A security plan with SD-WAN

While we’ve all heard that familiarity breeds contempt, it also provides contentment. If your organization’s data has been traversing an MPLS network and you’ve kept your applications in-house for years, the idea of moving to a new WAN approach and jettisoning the old will no doubt make your heart skip a few beats. And the notion of entrusting mission critical applications and workloads to SD-WAN may not be easing your concerns, even if it will provide your organization with a spate of benefits, including:

  • Lower costs,
  • Easier deployments,
  • Faster network traffic,
  • More bandwidth,
  • Enhanced visibility into your network, and
  • Greater flexibility.

But here’s the great news―if deployed correctly, SD-WAN will provide security benefits you haven’t enjoyed with your legacy architecture. Here are a few:

WAN Segmentation

With SD-WAN you can segment your network, which allows you to limit the damage in the event your organization falls victim to a security breach. For instance, you can deploy smaller overlay networks that are segmented by department, application or technology, such as video or VoIP.
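The blast-radius idea behind segmentation can be sketched as a simple policy check: traffic classes map to isolated overlay segments, and flows between different segments are denied unless explicitly allowed. This is a minimal illustration, not any vendor's actual configuration; the segment and class names are made up.

```python
# Hedged sketch of SD-WAN-style segmentation policy. A breach inside
# one overlay (e.g. "guest-overlay") can't reach another by default.
SEGMENT_OF = {
    "voip": "rt-overlay",        # real-time traffic shares an overlay
    "video": "rt-overlay",
    "finance": "finance-overlay",
    "guest": "guest-overlay",
}

ALLOWED_CROSS = set()  # cross-segment flows denied unless listed here

def flow_permitted(src_class: str, dst_class: str) -> bool:
    """Allow traffic within a segment; deny across segments by default."""
    src, dst = SEGMENT_OF[src_class], SEGMENT_OF[dst_class]
    return src == dst or (src, dst) in ALLOWED_CROSS

print(flow_permitted("voip", "video"))     # True  (same overlay)
print(flow_permitted("guest", "finance"))  # False (isolated segments)
```

If a guest device is compromised, the default-deny posture keeps the finance overlay out of reach; any exception has to be added deliberately to the allow list.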

Encryption, Centralized Management and Better Control

SD-WAN not only natively supports end-to-end encryption but, because the entire network can be centrally managed through controller software, also lets users manage security, control policies and compliance more easily and effectively. When a new site requires turn-up, it’s automatically authenticated, policies are downloaded, then access is granted. Conversely, legacy architectures require security to be handled by edge devices and firewalls, which usually means certain skill sets need to be on-site for turn-up. And the more remote offices there are, the greater the headaches, costs and, potentially, security issues. SD-WAN provides more visibility into the entire network; with better visibility comes better control, and with better control comes enhanced security.
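The turn-up sequence described above (authenticate, download policies, grant access) can be sketched in a few lines. This is purely conceptual: the device IDs, tokens and policy names are hypothetical, and real controllers authenticate with certificates rather than shared secrets.

```python
# Hedged sketch of zero-touch site turn-up under a central controller.
REGISTERED_DEVICES = {"edge-42": "secret-token"}  # provisioned out of band
CENTRAL_POLICIES = ["encrypt-all-tunnels", "deny-guest-to-finance"]

def turn_up(device_id: str, token: str) -> dict:
    """Authenticate a new edge device, then push policies and admit it."""
    if REGISTERED_DEVICES.get(device_id) != token:
        # Unknown or unauthenticated device: no policies, no access.
        return {"admitted": False, "policies": []}
    # Authenticated: controller pushes the current policy set centrally,
    # so no security specialist is needed on-site at the new office.
    return {"admitted": True, "policies": list(CENTRAL_POLICIES)}

print(turn_up("edge-42", "secret-token")["admitted"])  # True
print(turn_up("edge-42", "wrong-token")["admitted"])   # False
```

The point of the sketch is the ordering: access is granted only after authentication succeeds and policy is in place, which is what makes remote turn-up safe without on-site security staff.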

Got questions? Call on the SD-WAN experts

To find out more about SD-WAN and the many benefits it can provide your organization, including enhanced security, contact GDT’s tenured SD-WAN engineers and solutions architects. They’ve implemented SD-WAN solutions for some of the largest enterprise networks and service providers in the world. They’d love to hear from you.


You can read much more about SD-WAN, including dispelling the myths surrounding it, how it fits with IoT, its separation of the control and data planes, demystifying its overlay and underlay networks, its relationship with SDN, and why the SD-WAN market will grow by 1200% by 2021.

And to see how GDT’s SD-WAN experts delivered the perfect solution to a global software company, see it here.

Brazil now, U.S. later?

By Richard Arneson

Hopefully the answer is a resounding “NO”, but the Brazilian banking industry has recently been hit hard by “GhostDNS”, so named by China-based security research firm NetLab, which discovered the sinister malware in September. The phishing infection has hijacked over 100,000 routers in South America’s largest country and harvested customer login information for many of its largest financial services firms. It’s estimated to have been running undetected since June of this year.

The Domain Name System (DNS) simplifies the lookup of IP addresses associated with a company’s domain name. Users can remember domain names, but servers don’t understand our nomenclature; they need an IP address. Without DNS, the Internet, which processes billions of requests at any given moment, would grind to a halt. Imagine having to keep track of all the IP addresses associated with the thousands of websites you visit, then typing them into a browser.
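At its core, DNS is a name-to-address mapping. The toy table below illustrates the idea (real DNS is a distributed, hierarchical database, and the addresses here are just illustrative entries, not live ones):

```python
# Toy illustration of the name-to-IP mapping DNS performs: humans
# type names, servers need addresses. Entries are illustrative only.
DNS_TABLE = {
    "example.com": "93.184.216.34",
    "bank.example": "203.0.113.10",  # hypothetical bank site
}

def resolve(domain: str) -> str:
    """Return the IP address registered for a domain, or raise."""
    try:
        return DNS_TABLE[domain]
    except KeyError:
        # Real resolvers return NXDOMAIN for unknown names.
        raise LookupError(f"NXDOMAIN: {domain}")

print(resolve("example.com"))  # 93.184.216.34
```

GhostDNS works by substituting a table like this with one under the attacker's control, so that familiar names resolve to rogue addresses.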

Here’s how GhostDNS works

GhostDNS is spread through remote access vulnerabilities and can run on over seventy (70) different types of routers. NetLab identified over a hundred (100) different attack scripts that were deployed and discovered them running on several high-profile cloud hosting providers, including Amazon, Google and Oracle.

The attack scripts hijacked organizations’ router settings, which resulted in their traffic being sent to an alternative DNS service. This re-directed traffic headed to rogue, or phony, sites designed to mimic the landing pages of Brazil’s major banks (some telecom companies, ISPs and media outlets were targeted, as well). Users believed they were on the real landing pages, then happily typed in their usernames and passwords.

While GhostDNS malware has primarily affected routers in Brazil, which is one (1) of the top three (3) countries affected by botnet infections (India and China rank 1 and 2, respectively), the FBI is working to ensure it hasn’t spread to the United States. If you believe your organization may have been infected by GhostDNS, the FBI has provided an easy online way to check here. Just type your DNS information into the search box. It’s that simple.

A four-pronged module approach to evil

  1. A DNSChanger module attacks routers that, based on collected information, are deemed target-worthy due to weak or unchanged login credentials or passwords.
  2. A Web Admin module provides a portal, of sorts, where attackers can access the phony login page.
  3. A Rogue DNS module resolves the domain names to which users believe they’re heading. Again, most of these domain names are of Brazilian financial institutions.
  4. The Phishing Web module is initiated after the goal of the Rogue DNS module has been satisfied. It then steers the fake DNS server to the end user.
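One practical way to spot the kind of hijack the Rogue DNS module performs is to compare the answer from your configured resolver against a trusted public resolver. The sketch below injects the resolver functions so it can be shown offline; the domain and IPs are made up, and in practice the functions would issue real DNS queries (and CDNs can make naive IP comparison noisy, so treat disagreement as a flag, not proof).

```python
# Hedged sketch: flag a possible DNS hijack when your local resolver
# and a trusted resolver disagree about where a domain points.
def looks_hijacked(domain, local_resolve, trusted_resolve) -> bool:
    """Return True if the two resolvers disagree about the domain."""
    return local_resolve(domain) != trusted_resolve(domain)

# Simulated resolvers for illustration (addresses are made up):
trusted = lambda d: {"mybank.example": "203.0.113.10"}[d]
hijacked = lambda d: {"mybank.example": "198.51.100.66"}[d]  # rogue DNS

print(looks_hijacked("mybank.example", hijacked, trusted))  # True
print(looks_hijacked("mybank.example", trusted, trusted))   # False
```

A disagreement means your router may be pointing you at a server the attacker controls, exactly the condition the FBI's online check is designed to surface.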

As the result of NetLab’s detective work, the further spreading of GhostDNS appears to have been reined in. Networks have been shut down so remediation and enhanced security measures can be implemented. But rest assured, something as big, or bigger, will soon take its place.

IT Security questions? Turn to the Experts

GDT is a 22-year-old network and systems integrator that employs some of the most talented and tenured security analysts, solutions architects and engineers in the industry. They design, build and deploy a wide array of solutions, including managed security services and professional services. They manage GDT’s 24x7x365 Network Operations Center (NOC) and Security Operations Center (SOC) and oversee the networks and network security for some of the most notable enterprises, service providers and government agencies in the world. They’d love to hear from you.