Expect Machines To Beat Humans At Hacking

Ciarán Daly

October 8, 2018

5 Min Read

NEW YORK - From the news of today's Google+ shutdown to the Cambridge Analytica scandal, 2018 has seen some of the biggest data breaches in history - and the threats to personal and enterprise data only seem to be growing. 

For some cybersecurity professionals, AI is the last hope. Machine and deep learning offer sophisticated, powerful techniques for teaching security systems to recognize and combat new, previously unseen threats. However, they also hand hackers a new set of tools for defeating those same defenses.

To find out more, we spoke to Ivan Novikov, CEO and Co-Founder of Wallarm. Novikov is a white hat security professional with over 12 years of experience in security services and products. He is the inventor of the memcached injection and SSRF exploit classes, and a recipient of bug bounty awards from Google, Facebook, and others. He has recently spoken at HITB, Black Hat, and other industry events.

What does AI mean in practice for cybersecurity today?

AI means exactly three things for enterprise cybersecurity today: automating real-time attack/intrusion detection, vulnerability discovery/prioritization, and prompt exploit generation. AI provides adaptive real-time protection with different algorithms that replace signatures and rules. The adaptive, dynamic algorithms are based on machine learning and train themselves, thereby providing more comprehensive security.
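
To make the idea of "adaptive algorithms replacing signatures and rules" concrete, here is a minimal sketch of an anomaly detector trained on a baseline of normal traffic instead of a fixed rule list. The features, sample data, and model settings are hypothetical illustrations, not Wallarm's actual approach.

```python
# Illustrative only: a tiny anomaly detector over simple HTTP request features,
# standing in for the "adaptive algorithms" described above.
from sklearn.ensemble import IsolationForest
import numpy as np

def featurize(request: dict) -> list:
    """Turn a parsed request into a small numeric feature vector."""
    path = request.get("path", "")
    body = request.get("body", "")
    return [
        len(path),
        len(body),
        sum(c in "'\";<>%" for c in path + body),  # count of suspicious characters
        path.count("/"),
    ]

# Train on a baseline of normal traffic (here, stand-in data).
normal_traffic = [{"path": "/api/items/%d" % i, "body": ""} for i in range(500)]
X = np.array([featurize(r) for r in normal_traffic])
model = IsolationForest(contamination=0.01, random_state=0).fit(X)

# Score a new request: -1 means anomalous, 1 means normal.
suspect = {"path": "/api/items/1", "body": "' OR '1'='1"}
print(model.predict(np.array([featurize(suspect)])))  # e.g. [-1]
```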

Access to machine learning technologies is not reserved for enterprises or security defenders. It would be foolish to assume attackers and intruders would forego such an effective tool as AI to make their exploits better and their attacks more intelligent. This is especially true today, when it is so easy to leverage open-source frameworks like TensorFlow, Torch, or Caffe for out-of-the-box machine learning. Not being an attacker, I can only speculate about what these AI-generated exploits might look like, when we can expect them to materialize, and how we can protect ourselves from these threats.

AI-driven exploits are not only able to find new ways to discover vulnerabilities; they can also identify which data is most valuable to breach. Soon, AI will be able to generate entirely new ways to exploit these issues, whereas today it mostly speeds up step-by-step attack scenarios that were initially defined by humans. Pitting signature- and rule-based security solutions against AI-powered attacks is like bringing a knife to a gunfight.

What kinds of security challenges can be combated with machine learning in 2018?

Having said this, the main challenge this year lies in expediting and improving virtual patch generation. Upon the discovery and announcement of a new vulnerability, security analysts are tasked with generating instructions, rules, and signatures to block these types of attacks. Intrusion prevention systems and firewalls need to stop the epidemic before the vulnerability fix is released. The timeframe for virtual patch generation is critical, and so far it has not been done in under 20 hours.

Modern approaches such as recurrent neural networks can generate virtual patches in minutes instead of hours. Several critical Apache Struts issues were discovered last year and this year, highlighted by incidents like the Equifax hack. In each case, the virtual patch was generated several hours after the first exploits appeared in the wild. Security today requires greater efficiency in the feedback loop of vulnerability detection, patch generation, and attack mitigation. AI can adapt an exploit to a particular environment faster than a human can, by generating exploit variants and running them rapidly.
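
The interview doesn't spell out how a recurrent model would produce a virtual patch, but a toy version of the idea is to train a small character-level sequence model on known-benign and known-malicious payloads and use its verdict as the blocking rule. The sample payloads, model size, and threshold below are hypothetical illustrations, not Wallarm's implementation.

```python
# Toy sketch of ML-driven virtual patching: a character-level recurrent model
# learns to flag exploit-like payloads, and its verdict acts as a blocking rule.
import numpy as np
import tensorflow as tf

benign  = ["id=42", "q=struts+tutorial", "page=2&sort=asc"]
attacks = ["%{(#_='multipart/form-data')}", "${jndi:ldap://x}", "id=1;cat /etc/passwd"]
texts, labels = benign + attacks, [0] * len(benign) + [1] * len(attacks)

# Encode each payload as a fixed-length sequence of byte values.
max_len = 64
X = np.zeros((len(texts), max_len), dtype="int32")
for i, t in enumerate(texts):
    for j, ch in enumerate(t[:max_len]):
        X[i, j] = min(ord(ch), 255)
y = np.array(labels)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(256, 16),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=30, verbose=0)

def virtual_patch(payload: str) -> bool:
    """Return True if the request should be blocked."""
    x = np.zeros((1, max_len), dtype="int32")
    for j, ch in enumerate(payload[:max_len]):
        x[0, j] = min(ord(ch), 255)
    return float(model.predict(x, verbose=0)[0][0]) > 0.5
```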

What are the key obstacles to making AI / ML work for cybersecurity at scale?

Right now, experience is the main obstacle. Machine learning approaches are relatively young: only within the last two to three years have they been introduced into production. Before that, machine learning was used in research-only work and lacked substantial experimental validation. At the same time, almost all cybersecurity applications in production are mission-critical. This makes even a single error risky, and people usually prefer to make a mistake themselves rather than trust the machine to make it.

Technologies are also developing faster than protection mechanisms, partly because those protection mechanisms are built on the same technologies they are meant to protect. Machine learning reduces the time needed to develop new protection mechanisms for evolving technologies.

The second important advantage of AI in security is in Application Security Testing (AST). AI within Application Security Testing addresses security at the production level: we can use it to automate test generation and multiply functional test coverage beyond what humans are capable of producing manually. AI security offers a paramount advantage for DevOps teams developing and deploying secure applications with minimal friction, and for organizations scaling without jeopardizing data.
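
As a rough illustration of the "functional test multiplier" idea, the sketch below expands a handful of human-written seed inputs into many more test cases. It uses simple random mutations rather than a learned model, and the target function and mutation rules are hypothetical stand-ins.

```python
# Minimal sketch of a test-case multiplier: a few seed inputs are mutated
# programmatically into a much larger set of cases for the function under test.
import random

SEEDS = ["alice@example.com", "bob+test@example.org"]
MUTATIONS = [
    lambda s: s.upper(),
    lambda s: s.replace("@", "@@"),
    lambda s: s + "'--",                                       # injection-style suffix
    lambda s: s * 50,                                          # oversized input
    lambda s: s.encode().decode("ascii", "ignore") + "\x00",   # embedded NUL byte
]

def generate_cases(seeds, rounds=3, rng=random.Random(0)):
    cases = set(seeds)
    for _ in range(rounds):
        cases |= {rng.choice(MUTATIONS)(c) for c in list(cases)}
    return sorted(cases)

def validate_email(value: str) -> bool:
    """Stand-in for the function under test."""
    return "@" in value and 0 < len(value) < 255

for case in generate_cases(SEEDS):
    assert isinstance(validate_email(case), bool)  # at minimum, it must not crash
```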

By using AI security early in the production lifecycle, the 'Fail Fast' methodology stays under control, and applications are better equipped to resist AI-enabled hackers. Since machines can play chess and calculate more efficiently than humans, we can expect them to hack better as well.

Why is the relationship between cloud and AI so important? What are the risks of leveraging these together?

AI takes the cloud from ubiquitous to enterprising. The cloud is the natural platform for AI because of its scalability and performance. At the same time, all the resources should be available immediately when needed, and sometimes only for a short period of time.

Furthermore, AI offers perception and analysis that help humans and enterprises be more effective, not just more efficient. That's why the cloud is so strongly linked to AI when it comes to production systems; for research, you can still get by with a desktop and a couple of GPUs. The growing volume of data, coupled with the expansion of integrations, creates a greater need for timely analysis. The amount of data that needs to be stored, analyzed, and quickly retrieved is beyond human capacity.

Today, we see more and more integrations that don't fully analyze the data they carry in ways that would make humans and organizations more effective. The growth in integrations and stored data also creates a greater risk surface. AI thus has the ability to surface relevant data, protect data across integrations, and enable humans to be more effective.

As told to Ciarán Daly
