On May 25th, the new General Data Protection Regulation (GDPR) from the European Union (EU) will go into effect. The regulation is designed to protect the data of EU citizens, and penalties for non-compliance are steep (up to the greater of 20 million Euros or 4% of total global revenue). Even if your company isn’t based in the EU, the regulation could still affect your information security policies. To help you better prepare your IT security teams, here are four questions you should be asking yourself:
What is defined as personal data?
The GDPR defines what personal data consists of, which includes “an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.” While IP addresses are not specifically listed, they carry geolocation and provider information that could potentially be used to identify individuals. You’ll need to determine what data you’re currently holding.
Where are we storing personal data?
Twenty years ago, personal data was stored in a corporate data center. Today, that data can be stored on edge devices, mobile devices, on-premise servers, or even the public cloud. Data in 2018 is extremely mobile and fragmented. Your IT team must have full visibility into where the generated data is being stored. If you don’t have this information, you might have a problem.
Who can access the personal data?
GDPR restricts data usage to what is strictly covered by the initial consent agreement. And if you share that data with a third party, it is your company’s responsibility to ensure they are only using the data for the purpose covered by the initial consent. You need to know who is accessing the data and what they’re doing with it, even if they are outside your organization.
How do we ensure it is protected?
This is the hardest question to answer, as solutions get expensive quickly. A large number of data breaches begin with stolen credentials. Monitoring end-user behavior for anomalies can help identify risk and provide early warning signs of a potential data breach. Automated defenses can be set up to revoke read/write access for potentially compromised users, or to sinkhole their outbound traffic to prevent exfiltration.
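Anomaly detection of this sort doesn’t have to start with an expensive platform. As a minimal sketch (a hypothetical helper, not a feature of any particular product), you could compare each user’s daily record-access count against that user’s own baseline:

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag today's access count when it sits more than `threshold`
    standard deviations away from the user's own baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# A user who normally reads ~20 records a day suddenly reads 500:
baseline = [18, 22, 19, 21, 20, 23, 17]
print(is_anomalous(baseline, 500))  # True  -> candidate for review
print(is_anomalous(baseline, 21))   # False -> normal behavior
```

A flagged user wouldn’t be blocked automatically on a single signal like this; in practice the score would feed a broader policy that decides whether to revoke access or sinkhole traffic.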
Asking yourself the aforementioned questions is important, but being able to address and answer them is vital. If you can’t, you might be setting your company up for an expensive penalty.
2017 was one of the worst years on record for data breaches, computer vulnerabilities, and malware attacks. Based on the first four days of 2018, those numbers might be eclipsed after security researchers uncovered vulnerabilities in virtually all processors made since 1995. The two vulnerabilities are Meltdown, which has been isolated to Intel chips only, and Spectre, which affects all modern processors.
How does it work?
In modern computer architecture, the kernel, which is the central part of an operating system, controls, well, everything. It mediates access to system resources by controlling low-level hardware on behalf of programs run by users in a restricted, permission-gated environment, also known as Userland. The kernel prevents an application from modifying other applications’ memory, or even the kernel’s own memory. The CPU provides hardware support to separate Userland from the kernel, so programs won’t be able to take over the kernel and threaten security.
The CPU hardware relies on privilege levels, commonly known as rings, ranked in privilege from 0 to 3; Ring 0 is the most privileged, Ring 3 the least. In an ideal, secure world, Ring 0 can modify processes running in higher rings, but not vice versa. The processor will block Ring 3, or Userland, applications from accessing kernel memory in Ring 0. Userland applications are not even allowed to see details about kernel memory, its allocations, or its address space, because any of those could leak critical details about the system and compromise security.
How does Meltdown work?
Modern CPUs also have a technology called speculative execution, in which a processor anticipates what the next few instructions are supposed to accomplish. It then breaks them into smaller instructions and executes them in a (potentially) different order than the program intended. Most instructions are computationally independent, so the order in which they’re run should not matter. In some instances, a speculatively executed instruction will reference the CPU’s memory cache instead of the memory space referenced in the instruction, creating a potential side channel that could leak address spaces in Ring 0. Accessing the CPU cache is significantly faster than pulling data from main memory, so this shortcut for common values can mean a significant increase in performance, at the risk of security.
Basically, if the operations are run out of order, a later instruction that accesses restricted information could be run first, leaving that restricted memory in the CPU cache. An earlier instruction that does not have access to that section of memory can then read it out of the CPU cache. This breaks down the most fundamental isolation between Userland and kernel memory.
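The mechanics can be illustrated with a toy model. This is only a simulation of the cache footprint, not a real transient-execution attack; the helper names are invented for illustration:

```python
# Toy model of the Meltdown side channel. A real attack reads a byte it
# isn't allowed to touch inside a transient (rolled-back) instruction,
# then recovers it by timing which cache line got warmed. Here the
# cache footprint is simulated with a plain set.
CACHE_LINE = 64  # bytes per cache line

def transient_leak(secret_byte, cache):
    # The illegal read is rolled back architecturally, but the
    # dependent memory access below still leaves its slot in the cache.
    cache.add(secret_byte * CACHE_LINE)

def recover_byte(cache):
    # The attacker probes all 256 slots; the "fast" (already cached)
    # one reveals the secret value.
    for guess in range(256):
        if guess * CACHE_LINE in cache:
            return guess

cache = set()
transient_leak(0x42, cache)
print(hex(recover_byte(cache)))  # 0x42
```

In the real attack, the “is it in the cache?” test is performed by timing memory accesses: the one probe address that returns quickly is the one the transient instruction touched.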
How does Spectre work?
Spectre covers multiple vulnerabilities, all of them unprecedented and all a result of the way modern chips are designed. The public won’t have a complete fix until the next generation of chips is released. Spectre is significantly more difficult to exploit than Meltdown, and equally difficult to fix. It is a completely new class of attack, and no one is certain of the full extent of its security consequences. Defending against Spectre is likewise not fully understood: software and CPU microcode can only fix so much, and are only band-aid solutions at present.
What does this mean?
The vulnerabilities require local access to the machines, meaning an attacker must already have access and privilege to execute code on the machine.
These are information disclosure vulnerabilities, meaning that memory cannot be modified, only read. They cannot force code execution and cannot overwrite Ring 0 memory.
The main concern is shared hosting environments, where multiple users are hosting multiple virtual machines on shared hardware. Breaking out of virtualized machines and attacking the underlying hypervisor has always been difficult, but these vulnerabilities may make that task easier. One attacker with an account on shared server hardware could collect information on other users, and over time build a better profile for a future attack. Cloud service providers like Amazon, Microsoft, and Google have updated their hosting servers’ underlying OS and are monitoring the performance degradation (to date it has been extremely minimal).
How can I guard against Meltdown and Spectre?
As Tom Petty wrote, “The waiting is the hardest part.” And that’s what you’ll have to do: wait for patches for all of your devices to roll out. These vulnerabilities were supposed to be published later this year, but were leaked early. All vendors are in the process of rolling out updates to mitigate them; however, not all have had time to fully test those updates. Just make sure (as in, make certain!) that all users keep their devices up to date with the latest OS and security patches.
GDT Consulting Engineer Nate Atkinson delivers, as part of the GDT DevOps team’s weekly Lunch & Learn series, a great, basic overview on web security, including some fundamentals, such as privacy, authentication, and integrity. He discusses the importance of data security, both at rest and in flight, and common strategies for managing secure data. Also included: information on how to deal with the sometimes tricky issue of certificate management, full drive encryption, file system & block layer encryption, application layer encryption, and ways to secure data in transit: VPN, SSL/TLS, application layer encryption and encrypted messaging.
GDT’s Agile Operations Developer Lead Brett Kugler presented, as part of GDT’s weekly Lunch & Learn series, a great, informative presentation on Automation. It’s a hot topic in IT today, and there are many ways teams can incorporate automation in their environment. Brett discusses various automation techniques, including orchestration, provisioning and process. Join Brett and better understand how to build efficiencies throughout your organization.
A draft for a new standard has been created by the Internet Engineering Task Force (IETF). It effectively allows people to avoid the scrutiny of surveillance equipment on their networks while preserving secure connections.
Three Cisco employees have provided a working draft for the proposed standard. Devices known as “middleboxes” intercept and decrypt traffic to bolster network security. Typically, these types of devices are used by ISPs, corporations, and other organizations to monitor users. However, because they terminate TLS connections, they can also break application services. The new standard would be known as “ATLS” (Application Layer TLS) because it moves the TLS handshake up to the application layer of the OSI model by transporting TLS records in HTTP message bodies. This allows private and secure connections to persist between clients and servers, even if the traffic between them is being intercepted by middleboxes (it works by not trusting said equipment).
Middleboxes are deployed to monitor end-user internet activity, inspect application and system software traffic for threats, and handle other tasks. For middleboxes to inspect workstation and networking device traffic, they must be configured by system administrators as trusted certificate authorities. This enables them to decrypt TLS/SSL-protected connections like HTTPS. In an ideal world, a logical and centrally controlled topology would allow this to be done with ease. However, in the real world of enterprise networking, you often find more sprawling and convoluted network topologies, some of which defy logic itself. Sprinkle in a BYOD (bring your own device) policy and you have a recipe for complexity.
Keeping employees and their gizmos, mission-critical appliances, servers, etc. connected via middleboxes while managing network or configuration updates and other troubleshooting tasks not only keeps an IT department spinning plates, it may lead to their personnel being overextended. This new IETF proposal would provide a standardized mechanism to securely pass data through middleboxes without the added man-hours and loss of productivity that occur when configuring custom root certificate authorities (and their ancillary tasks). Ultimately, the standard will be fully compatible with both past and future versions of TLS in an effort to minimize the need for reconfiguration.
What are the Pros?
The proposed ATLS standard states that it will “avoid introducing TLS protocol handling logic or semantics into the HTTP application layer i.e. TLS protocol knowledge and logic is handled by the TLS stack, HTTP is just a dumb transport.” Besides the financial benefits that would be gained via productivity and efficiency, system administrators may also save a piece of their sanity, with the added bonus of making their job easier. And according to the authors, the more technical perks include:
There are several benefits to using a standard TLS software stack to establish an application layer secure communications channel between a client and a service. These include:
no need to define a new cryptographic negotiation and exchange protocol between client and service
automatically benefit from new cipher suites by simply upgrading the TLS software stack
automatically benefit from new features, bugfixes, etc. in TLS software stack upgrades.
Essentially, the concept is that a client will create two independent TLS connections: one at the transport layer directly with the service, possibly via a middlebox, and one at the application layer. As a fallback, a client could use ATLS only if the transport layer connection is broken down due to middlebox interference. “TLS sessions with multiple clients are tracked through an identifier in JSON messages sent in POST requests, and the approach would result in a new HTTP content type: application/atls+json.”
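To make that concrete, an ATLS message might look something like the following sketch. The session identifier and field names here are illustrative assumptions, not taken from the draft’s actual schema; only the content type is quoted from the proposal:

```python
import base64
import json

def atls_post_body(session_id, tls_record_bytes):
    # Hypothetical message shape: the draft tracks TLS sessions via an
    # identifier in JSON bodies, but these exact field names are
    # invented for illustration.
    return json.dumps({
        "session": session_id,
        "record": base64.b64encode(tls_record_bytes).decode("ascii"),
    })

# A TLS record (0x16 = handshake content type) wrapped for transport
# in an HTTP POST body with the proposed content type.
body = atls_post_body("abc123", b"\x16\x03\x03\x00\x05hello")
headers = {"Content-Type": "application/atls+json"}
print(headers["Content-Type"])
print(body)
```

Because the TLS record travels opaquely inside the HTTP body, a middlebox can inspect or even terminate the outer HTTPS connection without ever seeing inside the application-layer TLS session.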
Finally, it is worth keeping in mind that because the security considerations for this concept are still being worked through, the relevant section is listed as “To do.” For more information, you may read the draft published on the IETF’s site.
Unlike Mirai, which downloaded itself onto IoT devices using the default passwords, Reaper uses at least 9 known exploits to compromise the devices. Currently affected manufacturers include AVTECH, NetGear, Linksys, and D-Link, among others.
Both Mirai and Reaper are worms, which means they spread automatically from one device to another, so their calls back to a command and control server can be few and far between. Mirai’s scanning is extremely aggressive, often causing an unintentional denial of service on the small home routers it’s trying to take control of. Reaper is different in that its scans are much less aggressive, and it spreads very deliberately. This allows it to add devices to the botnet more stealthily and fly under the radar of security operations personnel looking for suspicious activity.
Hindsight is 20-20
Looking back at the 2016 Mirai attacks, researchers can see all of the telltale signs of an impending attack. Increased communication with unknown IPs, sudden increases in processor usage, and unresponsive IoT devices were all signs that could have been used to detect the botnet before its attacks on Dyn’s servers. Since Reaper is moving much more slowly, its intentions are harder to guess. We already know that it has enough devices to recreate the 2016 Mirai attacks with even greater power.
Some theories about the purpose of the Reaper Botnet include a giant distributed proxy network, or Tor endpoints to create more anonymized browsing resources. Some of the signs look like it’s going to mirror the Mirai attack, but other signs are completely new to us. It even lives harmoniously with Mirai on devices that have been compromised by both!
How do I protect my devices?
Almost all of the exploits being used to take over the devices target vulnerabilities discovered in the last 3 months. There is a very good chance that your IoT devices don’t have the updates required to patch those flaws. My advice is to patch often, turn on automatic updates, and check on your devices at least once a week. The Reaper code looks like it’s being updated, so new vulnerabilities can, and will, be exploited to take over your IoT devices.
For now, all we can do is wait in the calm before the storm.
A massive security flaw in the WPA2 encryption protocol has caused panic within the InfoSec community this week.
How bad is it?
If you own a device that uses WiFi, you’re affected. KRACK, a stylized way to write Key Reinstallation Attack, could allow an attacker within range of a WPA2 protected network to intercept traffic between a client and the access point. In some cases it even allows the attacker to forge and inject packets.
It is important to note that this is not a hardware problem. The weakness exists in the WPA2 protocol itself, so any correct implementation of WPA2 is affected. To prevent an attack, users have to update the firmware/software on their WiFi devices as soon as a patch is available. Luckily, most manufacturers released patches within 24 hours of the vulnerability being reported, and the Proof of Concept code to take advantage of the vulnerability has not yet been released.
HOW DOES THIS ATTACK WORK?
Whenever you connect to a WiFi access point, 4 messages are exchanged between your device and the router.
1. The access point sends an unencrypted message to the client.
2. The client generates a key and sends back its own random value, generated using the information in message 1.
3. The access point generates an encryption key and sends back a verification code.
4. The client sends back an acknowledgment, using the encryption key to verify that it is connected.
The KRACK attack takes place between messages 3 and 4. Since the access point is continuously waiting for the acknowledgment message, if it doesn’t hear back from the client in a set amount of time (usually 60 seconds), it re-transmits an exact copy of message 3. If the client receives message 3 again, it resets its nonce counter and reinstalls the encryption key, even if it is the same one already in use. The WPA2 protocol does not guarantee that an encryption key will not be reused.
An attacker simply has to listen for message 3 and replay it to the client; in some implementations the client will even reinstall an all-zero encryption key. Once the client accepts and reinstalls the key and resets its nonce, decrypting its traffic becomes a trivial matter, since the attacker knows the keystream is being reused.
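To see why nonce reuse is so damaging, consider a toy model. WPA2 actually uses AES in CCM mode, but the principle carries over: a given key and nonce produce a deterministic keystream, so if the keystream repeats, XORing two ciphertexts together cancels it out entirely. The snippet below uses a fixed repeating keystream purely as a stand-in:

```python
from itertools import cycle

def xor(data, keystream):
    # Encrypt/decrypt by XORing data against a repeating keystream.
    return bytes(a ^ b for a, b in zip(data, cycle(keystream)))

# Stand-in for the keystream that one (key, nonce) pair produces.
keystream = b"\x13\x37\xc0\xde\xfa\xce"

p1 = b"known plaintext "   # traffic the attacker can predict
p2 = b"secret password!"   # traffic the attacker wants
c1, c2 = xor(p1, keystream), xor(p2, keystream)

# Same key + same nonce => same keystream, so XORing the two
# ciphertexts cancels the keystream, leaving p1 XOR p2:
mixed = bytes(a ^ b for a, b in zip(c1, c2))
recovered = bytes(a ^ b for a, b in zip(mixed, p1))
print(recovered)  # b'secret password!'
```

No key recovery is needed at all: one stretch of predictable plaintext (and much WiFi traffic, like protocol headers, is predictable) unmasks everything else encrypted under the reused nonce.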
HOW DO THEY OBTAIN THE WPA2 HANDSHAKE?
A few months ago I wrote a post about breaking WEP WiFi security. The process here is similar: the attacker sets up a WiFi sniffer and sends deauth packets to force all clients to disconnect from the access point. Once the clients attempt to reconnect, the network is flooded with handshake messages, and collecting those is a simple task.
WHAT IS THE IMPACT OF THIS?
Once an attacker can decrypt all of your traffic, intercepting internet cookies and passwords becomes child’s play. An attacker can intercept TCP SYN packets as well, which allows them to decode TCP sequence numbers and potentially hijack your TCP session. RDP sessions, video streams, and secure downloads are all at risk of TCP hijacking.
HOW DO I PROTECT MYSELF?
Update your software, avoid using unfamiliar WiFi, use HTTPS whenever possible, and stick to trusted VPNs until your software is updated. You don’t need to change your WiFi password since those are not at risk for these attacks. Do not temporarily switch to WEP since that is even less secure than WPA2.
GDT is excited to follow the merger of two of our existing technology partners: HPE and SimpliVity!
HPE has recently announced that it has purchased SimpliVity, the company best known as a specialist in the “hyperconverged infrastructure” space.
Hyperconverged is the term used to describe systems that combine compute, storage, and networking into a single converged system. The market was estimated to be worth $2.4 billion in 2016 and, with an estimated growth rate of 25% per year, is expected to reach $6 billion by 2020.
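As a quick sanity check on those figures, four years of 25% compound growth on a $2.4 billion base does land right around the projected number:

```python
# 25% compound annual growth from $2.4B (2016) over four years (2020).
base, rate, years = 2.4, 0.25, 4
projection = base * (1 + rate) ** years
print(round(projection, 2))  # 5.86 (billion USD), i.e. roughly $6B
```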
HPE announced that “By bringing together HPE’s best-in-class infrastructure, automation and cloud management software with SimpliVity’s industry leading software-defined data management platform, HPE and its partner ecosystem will deliver the industry’s only ‘built-for-enterprise’ hyperconverged offering.”
Having SimpliVity’s innovative technology as part of HPE’s hyperconverged portfolio provides significant additional benefits to customers, including:
Built-in enterprise data protection and resiliency that simplifies backup and enables customers to more quickly restore operations.
Enterprise storage utilization and virtual machine (VM) efficiency that helps customers control cost and performance.
Always-on compression and de-duplication that guarantees 90 percent capacity savings across storage and backup.
Policy-based VM-centric management that simplifies operations and enables data mobility, making development teams and end-users more productive.
One of SimpliVity’s flagship products is OmniCube, a fully integrated hyperconverged infrastructure appliance. It converges 8 to 12 core data center functions into 2U building blocks on enterprise-grade servers. OmniCube folds in the hypervisor, compute, storage, backup, replication, deduplication, WAN optimization, and cloud gateway.
“This transaction expands HPE’s software-defined capability and fits squarely within our strategy to make Hybrid IT simple for customers,” said Meg Whitman, President and CEO, Hewlett Packard Enterprise. “More and more customers are looking for solutions that bring them secure, highly resilient, on-premises infrastructure at cloud economics. That’s exactly where we’re focused.”
In 2006, The New York Times ran the headline “The YouTube Election.” The article described how Senator Allen was caught on tape referring to a college student with a racial slur. The recording made its way to YouTube, which in turn put it on the front page of The Washington Post and then on TV. This knocked the Senator from his place as a leading contender for the 2008 Republican presidential nomination.
In 2014 it was the “Facebook election,” and in 2015 the “Live Stream election,” so it seems the run-up to every election is framed around some technology platform. As America heads to the voting booth, which technology platforms have shaped the way Americans will vote?
All of them.
And the reason is a term that technologists have been waving around for several years: BIG DATA. As the world becomes ever more reliant on information, every action taken online, every click, every item searched for, and every product purchased is recorded and logged, and this data is used to build a digital profile of each person.
In the science-fiction movie “Minority Report,” psychic technology was used to arrest criminals before they committed their crimes. In 2016, in the real world, access to big data has allowed political parties to analyze and predict how groups of people are most likely to behave, and therefore to influence their actions by seeding their social media feeds, emails, and phones with information to either change or reinforce that behavior.
Politically speaking, this is not new. In the last elections, Barack Obama’s election team used these techniques effectively to win votes. Tom Bonier, CEO of TargetSmart, a company that provides big data analytics to the national and state Democratic parties, is quoted as saying that “The playing field is a lot more level now than it was then…In 2008, the narrative was that Democrats were so much smarter and so much farther ahead than Republicans in this area, and that’s just not the case at all at this point. It’s a very level playing field in terms of the innovation that’s going on, on both sides of the aisle.”
Deep Root is the company the Republican party contracts to analyze its big data. David Seawright, director of analytics and product innovation at Deep Root, refers to the data as “weaponized” or “actionized,” as it can arm the GOP with the right information at the right time, so that the right message or action can be taken and shown to the right person.
This is what the election boils down to: one-on-one targeting.
Political Action Committees are no longer restricted in how much money they can spend to market themselves. Therefore, with the aid of big data analytical tools, the parties are able to spend large sums of money, but spend it with pinpoint accuracy. Now it’s up to the marketing and messaging specialists to get the right message across.
Politics catching up to Enterprise.
The one arena where big data analysis is not new is the enterprise, which has been recording and analyzing customer data since the inception of commerce. Technologies such as Cisco’s Unified Computing System (UCS) Integrated Infrastructure for Big Data provide businesses with the power to go beyond merely storing large amounts of data to extracting deep insights from it.
“Historically, enterprises have been collecting vast amounts of data that required loads of powerful hardware to interrogate,” says Allen Sulgrove, Director of the Digital Business Unit at GDT. “With systems such as SAP HANA, the hardware requirement is now significantly lower, so more companies are able to extract valuable insights from their data.”
Mr. Sulgrove explains that businesses are now collecting and analyzing sentiment data: unstructured data from sources such as social media, keyword searches, and Amazon reviews. This data is used to better tailor the business offering.
The analysis of information enables faster reactions and smarter decisions. Analytics provides insights into patterns that improve management and operational control while boosting productivity and driving better business outcomes, ultimately improving a business’s bottom line.
There’s just not much worse than dropping an unprotected phone.
The seconds that separate the initial phone fumble from its final collision with earth are harrowing to say the least. There’s the ill-fated swipe to regain possession, followed by the hacky sack-style kick attempt, and, ultimately, should all of that fail, the final open-palmed, arms outstretched miming of where we wish the thing had landed.
Recently, however, McFadden explained that he didn’t injure himself diving for a loose phone. Rather, he slipped on wet cement near a friend’s pool – iPhone in hand. Perhaps his focus on maintaining possession of his phone played a role in the nature of his injury, but the phone, itself, was not the cause.
With that, McFadden – who is expected to be back on the field at the start of the season next month – avoided Network World’s next list of real-life ways people have been hurt using their phones. These other 25 people did not share his fate.