As cyber attacks grow more diverse in nature and targets, it's essential that cyber security staff have the right visibility to determine how to remediate vulnerabilities, and AI can help to surface problems that its human colleagues can't find alone.
“Cyber security resembles a game of chess,” said Greg Day, vice-president and global field CISO at Cybereason, and formerly an executive at Palo Alto Networks.
“The adversary looks to out-manoeuvre the victim, the victim aims to stop and block the adversary’s attack. Data is the king and the ultimate prize.
“In 1996, the AI chess system Deep Blue won its first game against world champion Garry Kasparov. Since then it has become clear that AI can programmatically think broader, faster and further outside the norms, and that's true of many of its applications in cyber security now too.”
With this in mind, we explore particular use cases for AI in cyber security that are in place today.
Working alongside staff
Day went on to expand on how AI can work alongside cyber security staff in order to keep the organisation secure.
“We all know there aren’t enough cyber security staff in the market, so AI can help to fill the gap,” he said. “Machine learning, a form of AI, can read the input from SOC analysts and transpose it into an ever-expanding database.
How combining AI and humans can help to tackle cyber fraud
Charlie Roberts, head of business development, UK, Ireland & EU at IDnow, discusses how combining AI and humans can help to tackle cyber fraud. Read here
“The next time the SOC analyst enters similar symptoms, they are presented with previous similar cases along with their solutions, based on both statistical analysis and the use of neural nets, reducing the human effort.
“If there’s no previous case, the AI can analyse the characteristics of the incident and suggest which SOC engineers would form the strongest team to solve the problem, based on past experience.
“All of this is effectively a bot, an automated process that combines human knowledge with digital learning to give a more effective hybrid solution.”
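Day's "bot" could be sketched in miniature as a similarity search over past incidents. The sketch below uses plain bag-of-words cosine similarity rather than the statistical models or neural nets he describes, and every case, symptom and threshold is invented for illustration:

```python
from collections import Counter
from math import sqrt

# Hypothetical knowledge base built from past SOC cases.
PAST_CASES = [
    {"symptoms": "outbound traffic spike to unknown IP after phishing email",
     "solution": "isolate host, reset credentials, block destination IP"},
    {"symptoms": "repeated failed logins followed by privilege escalation",
     "solution": "lock account, audit sudo logs, rotate admin passwords"},
    {"symptoms": "ransomware note found, files encrypted on file share",
     "solution": "disconnect share, restore from backup, hunt for dropper"},
]

def vectorise(text):
    """Bag-of-words term counts for a free-text symptom description."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def suggest(symptoms, threshold=0.2):
    """Return past cases ranked by similarity to the new symptoms."""
    query = vectorise(symptoms)
    scored = [(cosine(query, vectorise(c["symptoms"])), c) for c in PAST_CASES]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for score, c in scored if score >= threshold]

matches = suggest("failed logins then privilege escalation on server")
```

Each new resolved incident would be appended to `PAST_CASES`, which is what makes the database "ever expanding": the system's suggestions improve as analysts keep feeding it solved cases.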
According to Imperva research, over 40 per cent of global internet traffic is made up of bots, and the majority of cyber attack techniques, such as account takeover, are carried out by these machines. Bots have also proved prominent in fraud attacks.
Mark Greenwood, chief technical architect at bot management specialists Netacea, delved into the benefits of bots within cyber security, keeping in mind the need to distinguish good from bad.
“Businesses can’t fight automated threats with human responses alone. They must employ AI and machine learning if they’re serious about tackling the ‘bot problem’. Why? Because to truly differentiate between good bots (such as search engine scrapers), bad bots and humans, businesses must use AI and machine learning to build a comprehensive understanding of their website traffic.
“It’s necessary to ingest and analyse a vast amount of data and AI makes that possible, while taking a machine learning approach allows cyber security teams to adapt their technology to a constantly shifting landscape.”
By observing behavioural patterns, businesses can see what an average user journey looks like, spot potentially suspicious activity and act accordingly.
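One simple behavioural signal such a system might examine (purely as an illustration, not Netacea's actual method) is the regularity of a client's request timing: scripted bots often fire at near-constant intervals, while human browsing is bursty. A minimal sketch, with invented timestamps and threshold:

```python
from statistics import mean, pstdev

def request_regularity(timestamps):
    """Coefficient of variation of the gaps between requests.
    Scripted bots tend to fire at near-constant intervals (low CV);
    human browsing produces irregular gaps (higher CV)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mu = mean(gaps)
    return pstdev(gaps) / mu if mu else 0.0

def classify(timestamps, bot_cv_threshold=0.2):
    """Crude traffic triage: flag suspiciously regular clients."""
    return "bot-like" if request_regularity(timestamps) < bot_cv_threshold else "human-like"

bot_ts = [0.0, 1.0, 2.0, 3.0, 4.0]    # metronomic scraper
human_ts = [0.0, 2.5, 3.1, 9.0, 9.4]  # bursty human browsing
```

A production system would combine many such features, which is why the quote stresses ingesting large volumes of traffic data; a single heuristic like this is trivially evaded.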
When considering certain aspects of cyber security that can benefit from the technology, Tim Brown, CISO at SolarWinds, said that AI can play a role in protecting endpoints. This is becoming ever more important as the number of remote devices used for work rises.
“By following best practice advice and staying current with patches and other updates, an organisation can be reactive and protect against threats,” said Brown.
“But AI may give IT and security professionals an advantage against cyber criminals.”
Should CEOs take responsibility for cyber-physical security incidents?
Gartner predicts that 75% of CEOs will be personally liable for cyber-physical security incidents by 2024, as the financial impact of breaches grows. Read here
Brown continued: “Antivirus (AV) versus AI-driven endpoint protection is one such example; AV solutions often work based on signatures, and it’s necessary to keep up with signature definitions to stay protected against the latest threats. This can be a problem if virus definitions fall behind, either because of a failure to update or a lack of knowledge from the AV vendor. If a new, previously unseen ransomware strain is used to attack a business, signature protection won’t be able to catch it.
“AI-driven endpoint protection takes a different tack, by establishing a baseline of behaviour for the endpoint through a repeated training process. If something out of the ordinary occurs, AI can flag it and take action — whether that’s sending a notification to a technician or even reverting to a safe state after a ransomware attack. This provides proactive protection against threats, rather than waiting for signature updates.
“The AI model has proven itself to be more effective than traditional AV. For many of the small and midsize companies an MSP serves, AI-driven endpoint protection typically covers only a small number of devices, so cost should be less of a concern. The other thing to consider is the cost of cleaning up after an infection: if an AI-driven solution helps to avoid a potential infection, it can pay for itself by avoiding clean-up costs and, in turn, creating higher customer satisfaction.”
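Brown's baseline-then-flag approach can be reduced to a toy model (the metric, figures and threshold below are invented; real products learn far richer behaviour): establish the normal range of an endpoint metric during training, then flag observations that deviate sharply.

```python
from statistics import mean, pstdev

class EndpointBaseline:
    """Toy sketch of baseline-driven endpoint protection: learn
    the normal range of a behavioural metric (here, file writes
    per minute), then flag values far outside that range."""

    def __init__(self, z_threshold=3.0):
        self.z_threshold = z_threshold
        self.mu = self.sigma = None

    def train(self, observations):
        """The 'repeated training process': record mean and spread."""
        self.mu = mean(observations)
        self.sigma = pstdev(observations)

    def is_anomalous(self, value):
        """Flag anything more than z_threshold deviations from normal."""
        if not self.sigma:
            return value != self.mu
        return abs(value - self.mu) / self.sigma > self.z_threshold

baseline = EndpointBaseline()
baseline.train([12, 9, 11, 10, 13, 8, 12, 11])  # normal writes/minute
ransomware_burst = 450                           # mass file encryption
```

The contrast with signature-based AV falls out directly: a never-before-seen ransomware strain has no signature, but its burst of file writes still lands far outside the learned baseline.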
Machine learning versus SMS scams
With flexible working between the office and home, and the use of personal devices for work tasks and collaboration remaining common post-pandemic, it's important to be wary of the scams lurking within text messages.
“With malicious actors diversifying their attack vectors during the pandemic and beyond — using Covid-19 as bait in SMS phishing scams — organisations are under a lot of pressure to bolster their defences,” said Brian Foster, chief product officer at ReliaQuest — formerly at MobileIron.
“To protect devices and data from these advanced attacks, the use of machine learning in mobile threat defence (MTD) and other forms of managed threat detection continues to evolve as a highly effective security approach.
“Machine learning models can be trained to instantly identify and protect against potentially harmful activity, including unknown and zero-day threats that other solutions can’t detect in time. Just as important, when machine learning-based MTD is deployed through a unified endpoint management (UEM) platform, it can augment the foundational security provided by UEM to support a layered enterprise mobile security strategy.
How to ensure edge device security
With cyber attacks rising while employees have been working from home, we look at how edge device security can be ensured. Read here
“Machine learning is a powerful, yet unobtrusive, technology that continually monitors application and user behaviour over time so it can identify the difference between normal and abnormal behaviour. Targeted attacks usually produce a very subtle change in the device and most of them are invisible to a human analyst. Sometimes detection is only possible by correlating thousands of device parameters through machine learning.”
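Foster's point about correlating many device parameters can be illustrated with a toy multivariate baseline (the telemetry fields, figures and threshold below are all invented): each parameter alone stays inside its normal band, but the deviations accumulate into a combined score that stands out.

```python
from statistics import mean, pstdev

def fit_baseline(history):
    """Learn per-parameter (mean, std dev) from past device
    snapshots, each a dict of parameter name -> value."""
    return {p: (mean(s[p] for s in history), pstdev(s[p] for s in history))
            for p in history[0]}

def anomaly_score(snapshot, baseline):
    """Sum of squared per-parameter z-scores: many small shifts,
    each individually within normal bounds, still accumulate
    into a high combined score."""
    return sum(((snapshot[p] - mu) / sigma) ** 2 if sigma else 0.0
               for p, (mu, sigma) in baseline.items())

# Invented telemetry: CPU %, network KB/s, process wake-ups/min.
history = [
    {"cpu": 10, "net": 5, "wakeups": 20},
    {"cpu": 12, "net": 6, "wakeups": 22},
    {"cpu": 11, "net": 4, "wakeups": 21},
    {"cpu": 9,  "net": 5, "wakeups": 19},
]
device_baseline = fit_baseline(history)

# A subtle compromise: every parameter drifts by roughly two
# standard deviations -- none would trip a 3-sigma alarm alone.
compromised = {"cpu": 13, "net": 6.5, "wakeups": 23}
```

This is the "invisible to a human analyst" case in miniature: no single reading looks alarming, yet the correlated drift across parameters separates the compromised snapshot from normal ones.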
Hurdles to overcome
These use cases and more demonstrate the viability of AI and cyber security staff effectively uniting. However, Mike MacIntyre, vice-president of product at Panaseer, believes that the space still has hurdles to overcome in order for this to really come to fruition.
“AI certainly has a lot of promise, but as an industry we must be clear that it’s currently not a silver bullet that will alleviate all cyber security challenges and address the skills shortage,” said MacIntyre.
“This is because AI is currently just a term applied to a small subset of machine learning techniques. Much of the hype surrounding AI comes from how enterprise security products have adopted the term and the misconception (wilful or otherwise) about what constitutes AI.
Blockchain and cyber security: seeing past the hype
Terry Greer-King, vice-president EMEA at SonicWall, discusses looking past the hype when it comes to blockchain and cyber security. Read here
“The algorithms embedded in many security products could, at best, be called narrow, or weak, AI; they perform highly specialised tasks in a single, narrow field and have been trained on large volumes of data, specific to a single domain. This is a far cry from general, or strong, AI, which is a system that can perform any generalised task and answer questions across multiple domains.
“Another key hurdle that is hindering the effectiveness of AI is the problem of data integrity. There is no point deploying an AI product if you can’t get access to the relevant data feeds or aren’t willing to install something on your network. The future for security is data-driven, but we are a long way from AI products following through on the promises of their marketing hype.”