Time To Trust The Machines For Cybersecurity

Ciarán Daly

February 19, 2019

by Annu Singh

Artificial intelligence has come a long way in recent decades. There are scant similarities between the dystopian sci-fi of 2001: A Space Odyssey and today’s Siri and Alexa.

However, beneath the glossy consumer veneer, AI is being used to do some great work – be it in autonomous vehicles, healthcare solutions or cyber security. When it comes to cyber security, AI has the potential to enjoy a symbiotic relationship with humans, helping us analyse complex systems and fortify our defences. That said, we are not yet making the most of AI, owing to issues around trust and control.

The case for AI

Currently, machine learning is used as a secondary aid to help human analysts make sense of large amounts of data to identify threats and mitigate vulnerabilities. As a result, decision-making is still limited by human speed.

However, thanks to a convergence of factors — including lower costs for general-purpose graphics processing units (GPUs), a rise in big data analytics, and advancements in deep learning algorithms — data is now much more pervasive and accessible. Feeding data-hungry machine learning models so they can train and make decisions is now more cost-effective than ever before. Organisations can realise significant gains in security operations and monitoring if they embrace autonomy and trust machine learning to make actionable decisions at machine speed.

Machine learning, the ability of a system to learn from data without being explicitly programmed, can be broken down into three types (a short illustrative sketch follows the list):

  • Supervised learning, wherein the data is labelled and the algorithm attempts to learn a function that maps inputs to known outputs, such as predicting the value of a house. Supervised learning uses regression analysis and similar techniques.

  • Unsupervised learning, wherein the data is not labelled, and the algorithm attempts to recognise patterns in the data. Unsupervised learning uses techniques like clustering and pattern recognition.

  • Reinforcement learning is experience-based: feedback on the outcomes of the system's actions is used to train the model. It is used for multi-step decision problems rather than single-step, yes-or-no problems. Although this type of learning does not need labelled data, it does need feedback to help the algorithm improve the accuracy of its decision-making.
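
As a concrete illustration of the first type, here is a minimal supervised-learning sketch in Python using scikit-learn. The house features and prices are invented toy values, not real data.

```python
# Minimal supervised-learning sketch: regression on labelled data.
# The features (floor area, bedrooms) and prices are invented toy values.
from sklearn.linear_model import LinearRegression

X = [[50, 1], [80, 2], [120, 3], [200, 4]]   # labelled training examples
y = [150_000, 230_000, 340_000, 560_000]     # known house prices (labels)

model = LinearRegression().fit(X, y)
print(model.predict([[100, 3]]))             # predicted price for a new house
```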

In cyber security, machine learning can be applied to malware detection, where it can outperform the signature-based techniques of the past, as well as to insider threat and anomaly detection (advanced behaviour analysis), botnet mitigation, and authentication (fingerprint and facial recognition).
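
To make the anomaly-detection use case concrete, the sketch below trains an unsupervised detector on synthetic "normal" session statistics and flags an outlier. The features, figures and the choice of scikit-learn's IsolationForest are illustrative assumptions, not a reference implementation.

```python
# Unsupervised anomaly detection sketch: flag sessions that deviate from
# the learned pattern of normal behaviour. All numbers are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Normal sessions: ~100 requests/min and ~5 MB transferred, with some noise.
normal_sessions = rng.normal(loc=[100.0, 5.0], scale=[10.0, 1.0], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_sessions)

suspect = [[400.0, 30.0]]            # far outside the learned pattern
print(detector.predict(suspect))     # -1 means "anomalous"
```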

Though many cyber security models are already trained and deployed, humans continue to make all the decisions, at human speed, largely because these systems are currently unable to adapt to changes in the dynamic world they operate in. But by using AI, we can now design, train and test intelligent cyber systems that are more robust, adaptive and responsive, and that can be given the autonomy to outmanoeuvre adversaries by reacting at machine speed.

AI-based simulation of the security process can use chance-based (Monte Carlo-style) methods to generate what-if scenarios, helping cyber experts avoid costly breaches while cutting the time needed to detect, deflect, prioritise and mitigate threats and their associated risks. Cyber security simulation can also automate the search for new threats. Though machine learning-based approaches to control in the cyber domain are difficult and risky, their benefits can be far-reaching. By 2023, research investment in machine learning is expected to reach $6 billion.
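
As a minimal sketch of chance-based what-if simulation, the snippet below estimates how often an attack slips past layered defences. The per-layer detection probabilities are hypothetical figures chosen for illustration.

```python
# Chance-based (Monte Carlo) what-if sketch: estimate how often an attack
# slips past three defensive layers. Detection probabilities are assumed.
import random

LAYER_DETECTION_PROBS = (0.7, 0.6, 0.5)   # hypothetical per-layer odds

def attack_succeeds() -> bool:
    """An attack gets through only if every layer fails to detect it."""
    return all(random.random() > p for p in LAYER_DETECTION_PROBS)

trials = 100_000
breaches = sum(attack_succeeds() for _ in range(trials))
print(f"Simulated breach rate: {breaches / trials:.1%}")  # ~6% (0.3*0.4*0.5)
```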

Policing policies

Cyber security systems are mostly rule- and policy-based. Experience-based reinforcement learning can be applied to sequential decision problems, where several steps must be taken to get from a starting point to a goal, and where the sequence of choices can be captured in a policy. The system learns the policy by optimising its model of the world based on feedback from its experiences. Such systems can be autonomous and self-adaptive, changing as the world around them changes and learning by observing their operating environment. This type of machine learning has three key requirements (a minimal sketch follows the list):

  1. The observation space defines the environment in which the system operates.

  2. Actions define the activities the algorithm can perform in pursuit of its goals.

  3. Reward signals are the feedback on how the system has performed. Rewards can be immediate, or they can be propagated back through the whole sequence of actions so that the system can work out which decisions led to good or bad outcomes.
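
The deliberately simplified, bandit-style Q-learning sketch below ties the three requirements together. The "attack"/"benign" observations, the action set and the reward values are all invented for illustration, not drawn from a real system.

```python
# Simplified Q-learning sketch: a single-step decision problem where the
# observation is a traffic label, actions are responses, and rewards are
# the feedback. States, actions and reward values are invented assumptions.
import random
from collections import defaultdict

STATES = ["attack", "benign"]                 # observation space
ACTIONS = ["monitor", "block", "escalate"]    # available actions
REWARDS = {                                   # feedback signal
    ("attack", "block"): 1.0,  ("attack", "monitor"): -1.0, ("attack", "escalate"): 0.5,
    ("benign", "block"): -1.0, ("benign", "monitor"): 0.1,  ("benign", "escalate"): -0.2,
}

q = defaultdict(float)        # learned value of each (state, action) pair
alpha, epsilon = 0.1, 0.1     # learning rate and exploration rate

for _ in range(5_000):
    state = random.choice(STATES)
    if random.random() < epsilon:
        action = random.choice(ACTIONS)                      # explore
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])   # exploit
    q[(state, action)] += alpha * (REWARDS[(state, action)] - q[(state, action)])

# The learned policy should block attacks and merely monitor benign traffic.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES})
```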

Cyber security problems can be simulated at low risk and for relatively little expense. The rewards are quickly evident in threat detection, for instance, where a true positive or false negative can be quickly and easily fed back to the machine. AI can also be made to act adversarially in a controlled environment in order to explore out-of-the-box solutions that standard approaches may overlook.
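
One way such feedback might be encoded as a reward signal is sketched below. The specific values, especially the heavy penalty for a missed threat, are illustrative assumptions rather than recommended settings.

```python
# Turning detector outcomes into reward feedback. The numbers are
# illustrative: a false negative (a missed threat) is penalised hardest.
def detection_reward(is_threat: bool, flagged: bool) -> float:
    if is_threat and flagged:
        return 1.0     # true positive: threat caught
    if is_threat and not flagged:
        return -10.0   # false negative: the costly miss
    if flagged:
        return -1.0    # false positive: analyst time wasted
    return 0.1         # true negative: business as usual

print(detection_reward(is_threat=True, flagged=False))  # -10.0
```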

But defining a problem for a machine can be challenging. We can over-specify a problem based on a single instance, making it difficult to generalise, or we can under-specify it, giving insufficient detail to reach any conclusion. The system therefore requires both extensive domain expertise and the ability to generalise that knowledge so it can be applied to wider scenarios.

However, these challenges are smaller than the issue of trust in autonomy. Currently it can be a challenge for staff to put their faith in AI. But as algorithms improve – driven by advancing technology and more mature learning systems, their experience enriched with real-life scenarios and a wider availability of training and test data – trust in these systems is sure to grow. Even today, for many larger organisations, life without AI-based ‘helpers’ is almost unimaginable. As such, humans need to learn to collaborate with, train and be confident in these new systems in order to avoid becoming yet another cyber crime statistic.

Annu Singh is Continuous Service Improvement Lead at DXC Technology
