Solutions Blog

Turbonomic Event at GDT’s Innovation Campus

On March 9th, GDT partner Turbonomic presented information on how they automate workloads for hybrid cloud environments. Their platform simultaneously optimizes performance, cost, and compliance in real time. Great company, great platform, great meeting!

GDT Innovation Campus Hosts Successful Cisco Cyber Threat Response Clinic

Attendees at GDT’s Innovation Campus enjoyed Part 2 of Cisco security product training in a lab environment. They learned, among other things, how environments get compromised and how to utilize Cisco products and solutions to uncover breaches and respond to them effectively. They experienced cyberattack situations in a virtual lab environment, and were able to play both attacker and defender! They received Core certification, as well.

Great event; well attended, as usual.

GDT Hosts VMware NSX Micro Ninja Training

 

On March 5th, GDT hosted a comprehensive, fast-paced training course that focused on installing, configuring, and managing VMware NSX™. This course covered VMware NSX as a part of the software-defined data center platform, and functionality operating at Layer 2 through Layer 7 of the OSI model. Hands-on lab activities were included to help support attendees’ understanding of VMware NSX features, functionality, and ongoing management. Very well attended; great event!

GDT Lunch & Learn on Continuous Integration/Continuous Deployment (CI/CD) Tools

GDT Network Systems Engineer Robert Powers presented, as part of the GDT Agile Operations (DevOps) team’s weekly Lunch & Learn series, information about one of the key tenets of application development’s golden age: the Continuous Integration / Continuous Deployment (CI/CD) model. It allows developers to release software more quickly and with fewer issues. Robert presented information on the tools and processes needed to build a CI/CD pipeline and demonstrated the value of the ‘release early, release often’ mantra.
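If you want to picture what such a pipeline looks like in practice, here is a minimal, hypothetical sketch in Python. The stage names and commands are placeholders (not the specific tools Robert covered), and real pipelines are normally defined in a CI system’s own configuration rather than a hand-rolled script.

# A minimal, hypothetical sketch of the CI/CD idea: every commit runs the same
# ordered stages, and a failure anywhere stops the release. The stage commands
# below are placeholders, not the tooling covered in the talk.
import subprocess
import sys

STAGES = [
    ("build",  ["python", "-m", "compileall", "-q", "."]),              # placeholder build step
    ("test",   ["python", "-m", "pytest", "-q"]),                       # placeholder test step
    ("deploy", ["python", "-c", "print('deploying artifact...')"]),     # placeholder deploy step
]

def run_pipeline():
    for name, cmd in STAGES:
        print(f"[{name}] running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"[{name}] failed; stopping the pipeline")
            sys.exit(1)
    print("pipeline passed: safe to release")

if __name__ == "__main__":
    run_pipeline()

The point is the shape of the thing: every commit goes through the same ordered gates, so releasing often becomes routine instead of risky.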

DNS Expert and Noted Author Cricket Liu Presents at GDT’s Innovation Campus

Considered by many IT professionals to be “the godfather of DNS”

In the event you weren’t fortunate enough to attend in person, on February 22nd Cricket Liu, the author of DNS and BIND and a preeminent expert on DNS, delivered an outstanding presentation on a very timely subject: network threats, such as DDoS and malware attacks. Both funny and super informative, Cricket delivered his presentation in such a way that anyone from engineers to those with far less technical acumen could understand and enjoy it.

Here’s the good news: you can view the entire presentation below.

[Video: Cricket Liu’s presentation at GDT’s Innovation Campus]

Are You Ready for GDPR?

by Moe Janmohammad, GDT Cybersecurity Analyst

On May 25th, the new General Data Protection Regulation (GDPR) from the European Union (EU) will go into effect. The regulations are designed to protect the data of EU citizens, and penalties for non-compliance are steep (up to the greater of 20 million Euros or 4% of annual global revenue). Even if your company isn’t based in the EU, the regulations could still affect your information security policies. To help you better prepare your IT Security teams, here are 4 questions you should be asking yourself:

  1. What is defined as personal data?

The GDPR defines personal data broadly; it includes “an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.” While IP addresses are not specifically named, they can reveal geolocation and provider information that could potentially be used to identify individuals. You’ll need to determine what data you’re currently holding.

  2. Where are we storing personal data?

Twenty years ago, personal data was stored in a corporate data center. Today, that data can be stored on edge devices, mobile devices, on-premises servers, or even in the public cloud. Data in 2018 is extremely mobile and fragmented. Your IT team must have full visibility into where that data is being stored. If you don’t have this information, you might have a problem.

  3. Who can access the personal data?

GDPR restricts data usage to the purposes covered by the initial consent agreement. And if you share that data with a third party, it is your company’s responsibility to ensure they are only using the data for the purpose covered by the initial consent. You need to know who is accessing the data and what they’re doing with it, even if they are from outside your organization.

  4. How do we ensure it is protected?

This is the hardest question to answer, as solutions get expensive quickly. A large number of data breaches begin with stolen credentials. Monitoring end-user behavior for anomalies can help identify risk and provide early warning signs of a potential data breach. Automated defenses can be set up to revoke read/write access for potentially compromised users, or to sinkhole their outbound traffic to prevent exfiltration.
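As a rough illustration of that last point, here is a minimal, hypothetical sketch of such an automated defense in Python; the event format, thresholds, and the revoke_access() helper are illustrative assumptions, not any particular product’s API.

# Flag accounts whose activity suddenly jumps far above their own baseline,
# then revoke access pending review. Purely a sketch of the idea.
from statistics import mean, pstdev

def is_anomalous(history, today, sigma=3.0):
    """Flag a user whose activity today is far above their own baseline."""
    avg, sd = mean(history), pstdev(history)
    return today > avg + sigma * max(sd, 1.0)

def revoke_access(user):
    # Placeholder: in practice this would call your IAM or directory service.
    print(f"revoking read/write access for {user} pending review")

# Toy event data: daily record-download counts per user.
activity = {
    "alice": {"history": [12, 9, 14, 11, 10], "today": 13},
    "bob":   {"history": [8, 7, 9, 10, 8],    "today": 450},  # looks compromised
}
for user, a in activity.items():
    if is_anomalous(a["history"], a["today"]):
        revoke_access(user)   # only "bob" trips the threshold here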

Asking yourself the aforementioned questions is important, but being able to address and answer them is vital. If you can’t, you might be setting your company up for an expensive penalty.

Beginning 2018 with a Meltdown

by Moe Janmohammad, GDT Cybersecurity Analyst

2017 was one of the worst years on record for data breaches, computer vulnerabilities and malware attacks. Based on the first four days of 2018, those numbers might be eclipsed after security researchers uncovered vulnerabilities in virtually all processors made since 1995. The two vulnerabilities are Meltdown, which has been isolated to Intel chips, and Spectre, which affects virtually all modern processors.

How does it work?

In modern computer architecture, the kernel, which is the central part of an operating system, controls, well, everything. It controls low-level hardware and mediates access to system resources for the programs users run in a restricted, “permission only” environment, also known as userland. The kernel prevents an application from modifying other applications’ memory, or the kernel’s own memory. The CPU provides hardware support to separate userland from the kernel, so programs won’t be able to take over the kernel and threaten security.

The CPU hardware relies on privilege levels, commonly known as rings, ranked in privilege from 0 to 3; Ring 0 is the most privileged, Ring 3 the least. In an ideal, secure world, Ring 0 can modify processes running in higher rings, but not vice versa. The processor will block Ring 3, or userland, applications from accessing kernel memory in Ring 0. Userland applications are not even allowed to see details about kernel memory, its allocations, or its address space, because any of those could leak critical details about the system and compromise security.
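You can see that boundary from an ordinary user account. The short sketch below is Linux-specific and purely illustrative: an unprivileged process is simply refused when it asks for kernel memory, and its own view of memory is limited to its userland address space.

# Linux-only illustration of the userland/kernel boundary described above.
try:
    with open("/proc/kcore", "rb") as f:   # image of kernel memory; root only
        f.read(16)
except PermissionError as err:
    print("blocked by the kernel:", err)
# /proc/self/maps lists only this process's own userland mappings;
# kernel addresses never appear in it.
with open("/proc/self/maps") as maps:
    print(maps.read().splitlines()[0])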

How does Meltdown work?

Modern CPUs also have a technology called speculative execution, which occurs when a processor anticipates what the next few instructions are supposed to accomplish. It then breaks them into smaller instructions and executes them in a (potentially) different order than the program intended. Most instructions are computationally independent, so the order in which they’re run should not matter. In some instances, though, a speculatively executed instruction leaves a trace in the CPU’s cache even after its result is discarded, creating a potential side channel that could leak the contents of Ring 0 address spaces. Accessing the CPU cache is significantly faster than pulling data from main memory, so this shortcut for commonly used values can mean a significant increase in performance, at the risk of security.

Basically, if the operations are run out of order, a later instruction that accesses restricted information could be run first and store that restricted memory in the CPU cache. Code that does not have access to that section of memory can then recover it from the cache, by timing which cache lines come back fast. This breaks down the most fundamental isolation between userland and kernel memory.
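To make the mechanics a little more tangible, here is a toy simulation of that cache side channel in Python. It is emphatically not a working exploit; a real Meltdown attack needs native code, a genuine transient read of kernel memory, and high-resolution timers. The simulation only models the logic: a transient access leaves a secret-dependent line “hot” in the cache, and the attacker recovers the secret by seeing which probe access is fast.

# Toy simulation of the cache side-channel logic behind Meltdown -- not an exploit.
SECRET = 42        # stands in for a kernel byte the attacker cannot read directly
CACHE_LINES = 256  # one probe "line" per possible byte value
cache = set()      # which probe lines are currently "hot" (cached)

def transient_victim_access():
    # Models the transient window: the secret byte indexes a probe array,
    # pulling that line into the cache before the fault squashes the result.
    cache.add(SECRET)

def probe_time(line):
    # Cached lines are "fast", uncached lines are "slow" (arbitrary units).
    return 10 if line in cache else 200

def recover_secret():
    # Flush+Reload-style probe: time every line and pick the fast one.
    timings = {line: probe_time(line) for line in range(CACHE_LINES)}
    return min(timings, key=timings.get)

transient_victim_access()
print("recovered byte:", recover_secret())  # prints 42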

How does Spectre work?

Spectre refers to multiple vulnerabilities, all of which are completely unprecedented and are a result of the way modern chips are designed. The public won’t have a complete fix until the next generation of chips is released. Spectre is significantly more difficult to exploit than Meltdown, and equally difficult to fix. It is a completely new class of attack, and no one is certain of the full extent of the security consequences. How to defend against Spectre is not yet fully understood; software and CPU microcode can only fix so much, and are only band-aid solutions at present.

What does this mean?

The vulnerabilities require local access to the machines, meaning an attacker must already have access and privilege to execute code on the machine.

These are information disclosure vulnerabilities, meaning that memory cannot be modified, only read. They cannot force code execution and cannot overwrite Ring 0 memory.

The main concern is shared hosting environments, where multiple users are hosting multiple virtual machines on shared hardware. Breaking out of virtualized machines and attacking the underlying hypervisor has always been difficult, but these vulnerabilities may make that task easier. One attacker with an account on shared server hardware could collect information on other users, and over time build a better profile for a future attack. Cloud service providers like Amazon, Microsoft, and Google have updated their hosting servers’ underlying OS and are monitoring the performance degradation (to date it has been extremely minimal).

How can I guard against Meltdown and Spectre?

As Tom Petty wrote, “The waiting is the hardest part.” And that’s what you’ll have to do—wait for patches for all other devices to roll out. These vulnerabilities were supposed to be published later this year, but were leaked early. All vendors are in the process of rolling out updates to mitigate these vulnerabilities. However, not all have had time to fully test these updates. Just make sure—as in make certain!—that all users keep their devices up-to-date with the latest OS and security patches.

 

GDT Lunch & Learn on Web Security

GDT Consulting Engineer Nate Atkinson delivers, as part of the GDT DevOps team’s weekly Lunch & Learn series, a great, basic overview of web security, including some fundamentals, such as privacy, authentication, and integrity. He discusses the importance of data security, both at rest and in flight, and common strategies for managing secure data. Also included: information on how to deal with the sometimes tricky issue of certificate management, full drive encryption, file system & block layer encryption, application layer encryption, and ways to secure data in transit: VPN, SSL/TLS, application layer encryption and encrypted messaging.
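As a small, standard-library-only illustration of a couple of those fundamentals, the sketch below verifies a server’s certificate (authentication) and sends a request over an encrypted channel (privacy and integrity in flight). The host is just a stand-in, and this is an illustration rather than a recommended configuration.

# Minimal TLS client sketch: certificate verification plus encryption in transit.
import socket
import ssl

HOST = "example.com"                          # stand-in host for the demo
context = ssl.create_default_context()        # verifies certs against the system CA store
with socket.create_connection((HOST, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        print("negotiated:", tls.version())   # e.g. TLSv1.2 or TLSv1.3
        print("peer subject:", tls.getpeercert().get("subject"))
        tls.sendall(b"GET / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        print(tls.recv(120))                  # the response traveled encrypted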

 

GDT Lunch & Learn on Automation

GDT’s Agile Operations Developer Lead Brett Kugler presented, as part of GDT’s weekly Lunch & Learn series, a great, informative presentation on automation. It’s a hot topic in IT today, and there are many ways teams can incorporate automation in their environments. Brett discusses various automation techniques, including orchestration, provisioning, and process automation. Join Brett and better understand how to build efficiencies throughout your organization.

SSL inspection devices causing you headaches? There’s an app-layer IETF draft for that

By Nic Hollins, GDT Network Security Engineer

A draft for a new standard has been created by the Internet Engineering Task Force (IETF). It effectively allows people to avoid the scrutiny of surveillance equipment on their networks and maintain secure connections.

Three Cisco employees have provided a working draft for the proposed standard. Devices known as “middleboxes” intercept and decrypt traffic to bolster network security. Typically, these types of devices are used by ISPs, corporations, and other organizations to monitor users. However, because they terminate TLS connections, they can also break application services. The new standard would be known as “ATLS” (Application Layer TLS) because it moves the TLS handshake up to the application layer of the OSI model by transporting TLS records in HTTP message bodies. This allows private and secure connections to persist between clients and servers even if the traffic between them is being intercepted by middleboxes, because the middleboxes are simply never trusted.

Middleboxes are deployed to monitor end-user internet activity, inspect application and system software traffic for threats, and perform other tasks. For middleboxes to inspect workstation and networking device traffic, they must be configured by system administrators as trusted certificate authorities. This enables them to decrypt TLS/SSL-protected connections like HTTPS. In an ideal world, a logical and centrally controlled topology would allow this to be done with ease. However, in the real world of enterprise networking, you often find more sprawling and convoluted network topologies, some of which defy logic itself. Sprinkle in a BYOD (bring your own device) policy and you have a recipe for complexity.

Keeping employees and their gizmos, mission-critical appliances, servers, etc. connected via middleboxes while managing network or configuration updates and other troubleshooting tasks not only keeps an IT department spinning plates, it may lead to their personnel being overextended. This new IETF proposal would provide a standardized mechanism to securely pass data through middleboxes without the added man-hours and loss of productivity that come with configuring custom root certificate authorities (and their ancillary tasks). Ultimately, the standard is intended to be fully compatible with both past and future versions of TLS in an effort to minimize the need for reconfiguration.

What are the Pros?

The proposed ATLS standard states that it will “avoid introducing TLS protocol handling logic or semantics into the HTTP application layer i.e. TLS protocol knowledge and logic is handled by the TLS stack, HTTP is just a dumb transport.” Besides the financial benefits that would be gained via productivity and efficiency, system administrators may also save a piece of their sanity along with the added bonus of making their job easier. And according to the authors, the more technical perks include:

There are several benefits to using a standard TLS software stack to establish an application layer secure communications channel between a client and a service. These include:

  • no need to define a new cryptographic negotiation and exchange protocol between client and service
  • automatically benefit from new cipher suites by simply upgrading the TLS software stack
  • automatically benefit from new features, bugfixes, etc. in TLS software stack upgrades.

Essentially the concept is that a client will create two independent TLS connections, one at the transport layer directly with the service, possibly via a middlebox, and one at the application layer. As a fallback, a client could use ATLS only if the transport layer connection is broken down due to middlebox interference. “TLS sessions with multiple clients are tracked through an identifier in JSON messages sent in POST requests, and the approach would result in a new HTTP content type: application/atls+json.”
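To get a feel for the core idea, which is that the TLS handshake can be driven by the application and its records carried over whatever transport you like, here is a small Python sketch using the standard library’s ssl.MemoryBIO. It is not the ATLS draft protocol itself: for simplicity the records are shuttled over a plain TCP socket to example.com, whereas real ATLS would place those same bytes in HTTP POST bodies and track sessions with JSON identifiers as the draft describes.

# Sketch: TLS handshake driven entirely in application code via memory BIOs.
import socket
import ssl

HOST, PORT = "example.com", 443               # stand-in server for the demo
ctx = ssl.create_default_context()
incoming = ssl.MemoryBIO()                    # TLS records received from the transport
outgoing = ssl.MemoryBIO()                    # TLS records the stack wants us to send
tls = ctx.wrap_bio(incoming, outgoing, server_hostname=HOST)
sock = socket.create_connection((HOST, PORT))

def pump(op):
    # Run a TLS operation, moving raw records over the transport as needed.
    # In ATLS these send/recv calls would become HTTP POSTs carrying the records.
    while True:
        try:
            return op()
        except ssl.SSLWantReadError:
            if outgoing.pending:
                sock.sendall(outgoing.read())
            data = sock.recv(4096)
            if data:
                incoming.write(data)
            else:
                incoming.write_eof()

pump(tls.do_handshake)                        # handshake happens above the transport
pump(lambda: tls.write(b"GET / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n"))
if outgoing.pending:
    sock.sendall(outgoing.read())             # flush the encrypted request records
print(pump(tls.read)[:80])                    # first bytes of the decrypted response
sock.close()

The handshake and the encryption stay end to end between client and server; the transport in the middle, whether a raw socket, a middlebox, or an HTTP exchange, only ever sees opaque TLS records.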

Finally, it is worth keeping in mind that because the security considerations for this concept are still being worked through, the relevant section of the draft is listed as “To do.” For more information, you can read the draft published on the IETF’s site.