Google Says AI Will Help Defenders More Than Hackers. Here’s How.

Google also open-sourced Magika, the AI-powered file type identification tool it uses in Gmail, Drive and other services

Deborah Yao, Editor

February 16, 2024

4 Min Read

Google took the wraps off an ambitious plan that harnesses AI to turn the cybersecurity tide in favor of defenders, not hackers, even if both sides have access to the technology.

Organizations and individuals today are burdened with what’s called the “Defender’s Dilemma”: Attackers only have to succeed once, but defenders have to protect themselves at all times. The sheer volume and complexity of playing defense can be overwhelming.

But Google argues that the way AI is currently being developed will give defenders the edge.

“This is our once-in-a-generation moment to change the dynamics of cyberspace for the better — a chance for profound transformation, not incremental gains,” according to Google’s report, “Secure, Empower, Advance: How AI Can Reverse the Defender’s Dilemma.”

AI is rewiring the digital experience and may be even bigger than the internet itself, the report said.

The internet comprises a vast network of interconnected computers built to communicate and route information reliably. But it was not built with security at its heart, the report said. Over time, its core stack has grown complicated, with layers of software that also introduce vulnerabilities.

“Complexity is hard to manage, and unmanaged complexity introduces systemic risks,” according to Google.


In contrast, AI is being built from the ground up with security concerns in mind. Moreover, it can analyze data at machine speed, helping overburdened cybersecurity staff find and thwart attacks. AI is also automating routine security functions and could, in time, enable “self-healing” networks that learn from attacker behavior to block attacks autonomously.

“AI’s ability to reason effectively stems from its ability to learn. Machine learning enables an AI system to improve its performance on a given task without being explicitly programmed for every specific scenario,” the report said.

AI is already making inroads. Google said its VirusTotal malware detection service reported in November that AI identified malicious scripts up to 70% better than traditional methods alone.

Google also said it is open-sourcing Magika, an AI-powered tool for identifying file types, a key step in scanning for malware. Magika is already used in Gmail, Drive and by its VirusTotal team. The company said the tool is 30% more accurate than typical file identification methods, with up to 95% higher precision on hard-to-identify, potentially problematic content such as VBA, JavaScript and PowerShell.
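For readers who want to try it, Magika ships as a Python package (pip install magika) alongside the open-sourced model. The snippet below is a minimal sketch based on the Python API as documented around the initial release; attribute names such as ct_label and score, and the sample file name, reflect that release-era documentation and may differ in later versions.

```python
from pathlib import Path

from magika import Magika  # pip install magika

# Load Magika's bundled deep-learning model; inference runs locally,
# with no network calls.
m = Magika()

# Identify the content type of raw bytes, e.g. an email attachment
# held in memory before it is ever written to disk.
result = m.identify_bytes(b"#!/bin/bash\necho 'hello world'")
print(result.output.ct_label, result.output.score)  # e.g. shell 0.99

# Or identify a file on disk by path (hypothetical file name).
result = m.identify_path(Path("suspicious_attachment.bin"))
print(result.output.ct_label)
```

A command-line client backed by the same model is also included (e.g., magika somefile), which makes it easy to batch-check downloads or mail attachments.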

The company is also giving away $2 million in research grants and pursuing strategic partnerships in the field of AI-powered security.


Can’t attackers use AI, too?

Google acknowledges that hackers have access to AI, too. But it believes that the current approach to AI gives defenders the edge.

“Some commenters are concerned that breakthroughs in this area will exacerbate zero-day exploitation in the wild, but we think the opposite is true: advances in AI-powered vulnerability and exploit discovery will benefit defenders more than attackers,” according to the company.

Here’s why:

#1 – AI will democratize security expertise

Today, the internet connects millions of smaller organizations with little or no capacity for cybersecurity. Breach just one or a few of them, and the wider network is compromised.

“AI can put a capable security expert in each of them,” Google contends. Over time, AI can merge and automate the feedback loop in how software is developed, deployed, run and managed, creating an “AI-based digital immune system” that learns from hacking attempts and shares those lessons in real time across the cloud.

“AI may make attackers better, but the gains will not be nearly as great as those felt by democratizing security expertise for everyone,” the report said.

#2 – Defenders will have much better AI models

How good an AI system is depends on its underlying models; how good those models are depends on the quantity and quality of their training data.

Most cyber attackers do not have access to the high-level datasets of defenders and “none can rival the combined efforts of the cybersecurity community,” according to the company.

Moreover, pooling security-relevant datasets can ensure defenders have access to better models than attackers.

While Google believes defenders today have an advantage, it is “tenuous” since attackers can steal or subvert models.

Defenders must make sure that their “foundational approach to AI safety and security” ensures these models cannot be misused, the report said.

But there is a big caveat: Regulators must not allow people to opt out of AI security systems. The defenders’ AI advantage would be neutralized if they did, because attackers could exploit the unprotected pockets left behind.

Regulators must also promote, rather than ban, AI-powered security for critical infrastructure and public sector networks.


About the Author

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford grad who has worked at Amazon, Wharton School and Associated Press.
