Since the start of this year, it has felt like Armageddon within the cyber-security arena. Attacks are coming in thick and fast, with resultant breaches hitting the headlines on a regular basis. Today, complex malware can evade most traditional security solutions, and it may take months before it is discovered. Meanwhile, valuable information is stolen and critical infrastructure is left open to disruption.
Most solutions attempt to detect and, where possible, block malware using signatures, behavioural analysis, post-infection scanning for indicators of compromise, or other means. These approaches can only detect known threats and attack techniques, not sophisticated zero-day or polymorphic threats.
The sophisticated, targeted attacks that are hardest to detect frequently result in the most serious and costly breaches. With these attacks becoming more frequent and more advanced, current security solutions simply aren't capable of blocking them. At this point, antivirus has essentially zero utility on a system that is up to date with patches.
> See also: Catastrophe in the cloud: what the AWS hacks could mean for cloud providers
We have seen a real shift in the way hackers infiltrate corporate networks. Rather than targeting externally accessible servers, which are centrally managed and closely monitored, they have moved to attacking endpoint devices such as desktops and laptops running Windows or OS X. Although there is often not much of interest on an individual PC or laptop, once compromised, these devices can serve as a launch pad for advanced persistent threat (APT) campaigns, enabling hackers to spread through the network until servers of interest are identified and exploited and targeted data is exfiltrated.
As hackers get more creative and determined, enterprises end up spending more time and money layering on more and more security solutions: antivirus, application whitelisting, host intrusion prevention, web filtering and more. Each layer tries to solve the same problem: protecting vulnerable data and applications. Each layer adds complexity and cost without actually solving the problem. Many layers share the same blind spots and weaknesses, so the sum of the parts may be little better than any one individually. Each layer also adds code and may increase the attack surface, potentially becoming a target itself and making the system more vulnerable rather than less.
Time for change
No matter how many millions of dollars enterprises throw at the problem, IT is constantly firefighting. Precious time and resources are spent investigating thousands of false positives and attempting to remediate hundreds of endpoints that are later found to be infected, while users contend with downtime. Amongst the noise, the worst kinds of attack typically go unnoticed and uninvestigated.
Unlike everyday viruses, trojans and worms, which are intended to infect large numbers of organisations, advanced threats are highly customised for each attack. To bypass most signature-based defences (for example, intrusion prevention systems, antivirus and secure web gateways), all hackers need to do is change a single byte of code. Doing so alters the threat's signature, and until signature-based defences are updated, the threat cannot be detected.
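The fragility of signature matching can be sketched with a toy example. This is a hypothetical illustration, not any real product's detection engine: the scanner below matches payloads by SHA-256 hash, so changing even a single byte produces a hash it has never seen.

```python
import hashlib

# Hypothetical known-bad signature database, keyed by SHA-256 hash.
# The payload bytes here are invented for illustration only.
KNOWN_BAD_SIGNATURES = {
    hashlib.sha256(b"MALWARE-PAYLOAD-v1").hexdigest(),
}

def is_flagged(payload: bytes) -> bool:
    """Return True only if the payload's hash matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SIGNATURES

original = b"MALWARE-PAYLOAD-v1"
mutated = b"MALWARE-PAYLOAD-v2"   # one byte changed, same behaviour

print(is_flagged(original))  # True  -- the known variant is caught
print(is_flagged(mutated))   # False -- the trivially mutated variant slips past
```

Real scanners use richer signatures than a whole-file hash, but the underlying limitation is the same: the defence only recognises what it has already seen.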
Regardless of how targeted the attack is, it is destined to fail if the vulnerability on the target host has already been patched. Unfortunately, IT organisations often can't keep up with system patching, relegating it to once per month or sometimes once per quarter. Infrequent patching opens the door to hackers who design malware to exploit recently disclosed vulnerabilities. Truly sophisticated attacks will use previously unknown, and hence unpatched, 'zero-day' vulnerabilities, but the cost to the attacker of discovering and weaponizing one of these is considerably greater than that of targeting recently disclosed vulnerabilities.
In addition to the increasing onslaught of security challenges that originate from the web, email and social media, organisations are faced with the growing trend toward mobility and 'bring your own device' (BYOD), which is transforming the way people do business. Users want the freedom to work from home and on the road without having to worry about being compromised. But when they access corporate data on mobile devices using public Wi-Fi hotspots, or download apps from untrusted sources, they are opening the door to hackers. In response, enterprises continue to add yet more layers of security that are generally cumbersome and ineffective, and that impose restrictive policies on users.
Given how targeted and sophisticated cyberattacks have become, and how inadequately traditional security defences detect them, there has to be a better way to defend endpoints and networks. Perhaps there's a way to detect cyberattacks and render them harmless in the process?
Today, if we look at commodity systems, we have a huge code base. A Windows PC has tens of millions of lines of code in the kernel and privileged services, and countless millions more in applications. These code bases are also dynamic, with new content constantly downloaded from websites and plugins. Trying to secure a code base of this size is futile: an endless game of whack-a-mole, patching vulnerabilities as they are discovered.
> See also: The case for desktop virtualisation
Now imagine if for every single task you want to perform on your machine – every document you open or website you visit – you could unwrap a brand new, pristine laptop for that particular task. And when you're finished with that task and want to go elsewhere, you put it down and unwrap a brand new laptop for your next task. At this point, you no longer really care about malware: if something bad happens visiting one website then the problem is contained – there's no other data on the device to steal, and you start with a clean device for the next website.
Enter micro-virtualization: an innovative approach to endpoint security that uses leading-edge technology to finally give enterprises the upper hand against cyber criminals.
Micro-virtualization uses the hardware-virtualization technology now built into every modern CPU to place each user task, along with the data and resources associated with it, in a hardware-isolated micro-virtual machine (micro-VM). Each micro-VM has only 'need-to-know' access to the data, networks and local hardware devices necessary to perform its task.
Windows malware can compromise the application and OS running in a micro-VM, but it can’t escape from the micro-VM. The microvisor still protects the enterprise network, the endpoint and the user. Micro-VMs are created and destroyed in milliseconds, discarding malware and ensuring that the system is unaffected. All of this occurs automatically, with minimal impact on the user experience.
The technology leverages the virtualization features built into Intel and AMD CPUs to create hardware-isolated micro-virtual machines (micro-VMs). All micro-VMs are isolated from each other and are granted access only to specific resources as required by the user task, such as files, network services, the clipboard and interaction with users and devices.
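The 'need-to-know' model described above can be sketched as a default-deny policy: each micro-VM carries an explicit allow-list of resources for its task, and anything not on the list is refused. The class and resource names below are invented for illustration; they are not a real microvisor API.

```python
from dataclasses import dataclass, field

@dataclass
class MicroVM:
    """Hypothetical model of one hardware-isolated micro-VM."""
    task: str
    allowed: set = field(default_factory=set)  # resources this task may touch

    def request(self, resource: str) -> bool:
        # Default-deny: access succeeds only if explicitly granted.
        return resource in self.allowed

    def destroy(self) -> None:
        # Discarding the micro-VM discards any malware state along with it.
        self.allowed.clear()

# A micro-VM spun up to render one untrusted web page gets only what
# that page needs: its own network destination and the clipboard.
browser_vm = MicroVM("open untrusted web page",
                     allowed={"network:example.com", "clipboard"})

print(browser_vm.request("network:example.com"))        # True: granted
print(browser_vm.request("file:C:/Users/docs/hr.xlsx"))  # False: not need-to-know
browser_vm.destroy()  # task finished; any compromise is thrown away with it
```

The design point this illustrates is that isolation is per task, not per machine: even if the code inside the micro-VM is fully compromised, the set of resources it can reach was fixed before the attack began.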
> See also: How to build the best hybrid cloud for your business
The primary benefit of a micro-VM is that, if a user opens a file or web page infected with malware, the malware cannot infect the underlying host operating system or other network hosts, as the malware is robustly isolated within the micro-VM. When the application running in the micro-VM is closed, the micro-VM is discarded and the malware permanently deleted.
Because micro-virtualization isolates every user task, it also isolates every component of every threat, no matter how advanced. System resources and the network are not affected by malware because it is never able to access trusted systems. Micro-virtualization means you no longer have to care about the security of the applications and OS running inside the micro-VM, as they are no longer part of your trusted code base, so emergency patching is no longer required. In fact, you may choose to intentionally run old versions of software such as Java to avoid the need to re-test applications with newer versions.
> See also: The software-defined data centre is happening today
While software is running in a micro-VM, the microvisor can monitor its actions and record a 'black box' flight-recorder trace of its execution. Since micro-VMs are created to perform specific tasks, such as running Word or Internet Explorer, there is a clear expectation of what that application's normal behaviour looks like. If execution deviates from this, it must be the result of malicious data input. The microvisor can then signal the deviation and upload the flight-recorder trace to the enterprise's security operations centre for further automatic analysis, enabling the malware's modus operandi to be determined. Malware is thus not only defeated, it can be discovered and dissected too, potentially enabling network perimeter defences to be configured to prevent further attacks that would be dangerous to non-micro-virtualized systems.
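The behavioural-monitoring idea can be sketched as comparing a recorded trace against the expected profile for the task. The profiles, action names and trace below are all invented for illustration; a real microvisor records far richer execution detail.

```python
# Hypothetical expected-behaviour profiles, one per task type.
EXPECTED = {
    "word": {"open_document", "render_page", "save_document"},
}

def analyse_trace(task: str, trace: list) -> list:
    """Return the actions in the trace that fall outside the task's profile."""
    profile = EXPECTED.get(task, set())
    return [action for action in trace if action not in profile]

# A document-viewing micro-VM suddenly spawns a shell and phones home:
# both actions deviate from the profile and would be flagged for the SOC.
trace = ["open_document", "render_page",
         "spawn_process:cmd.exe", "connect:198.51.100.7:443"]
deviations = analyse_trace("word", trace)
print(deviations)  # the two suspicious actions, worth uploading for analysis
```

Because the micro-VM's legitimate behaviour is so narrowly scoped, anything outside the profile is a strong signal rather than yet another ambiguous alert.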
We now live in a world where the question is no longer ‘if’ your network will be compromised, but ‘when.’ We know from Verizon and other research studies that the most common and effective way for hackers to target data of interest is to compromise vulnerable endpoints and use them as launch pads as part of an advanced threat campaign.
If we are to stand a chance of defeating highly sophisticated and well-funded cybercriminals, we have to think differently. Our traditional network and endpoint security defences simply can't keep up with today's sophisticated threats. Micro-virtualization gives enterprises a new approach to tackling a very serious challenge facing every enterprise IT security team. By protecting against zero-day and polymorphic threats, it makes Windows endpoints radically more secure, closing the door to their use as launch pads for APTs and other advanced threat campaigns.
Sourced from Ian Pratt, co-founder, Bromium