How AI Deepfakes Threaten Cybersecurity

Businesses need to take a holistic approach to protect themselves

Peter Garraghan, CEO and co-founder at Mindgard

November 11, 2024


When most people think of AI-generated deepfakes, they probably think of videos of politicians or celebrities manipulated to make it appear as though they said or did something they didn’t. These can be humorous or malicious, but when deepfakes make the news, it is usually in connection with a political misinformation campaign.

What many people don’t realize, however, is that the malicious use of deepfakes extends well beyond the political realm. Scammers are increasingly adept at using real-time deepfakes to impersonate individuals with certain permissions or clearances, thereby gaining access to private documents, sensitive personal data and customer information. This is a serious cybersecurity threat, and one that too few businesses are protected against.

Combined with the ability to generate audio, images and video of business leaders saying or doing controversial or unethical things, deepfakes can cause serious damage to businesses’ privacy, security, finances and reputations. We are already seeing this happening across the world. 

In February 2024, a finance worker at a multinational firm was fooled into paying out $25 million of company funds by deepfakes impersonating the CFO and other staff members on a video conference call. Video is arguably the most trusted method of digital business communication, so its newfound vulnerability to increasingly elaborate and convincing deepfake-enabled scams is a serious concern. Even technologically literate people can fall for these fakes, and that is a major problem for businesses everywhere.


The Deepfake Detection Merry-Go-Round

Access to cheap computing power enabled not only the invention but also the popularization of deepfakes. The cost of generating them continues to fall year-on-year, making it ever easier for scammers to create deepfakes rapidly and at scale. This has led to an iterative cat-and-mouse game: cybersecurity professionals build new detectors, and deepfake creators quickly adjust their generation methods to produce videos that evade detection.

Deepfake detectors are predominantly powered by AI. Sophisticated AI is required to recognize sophisticated AI, since many of the patterns that indicate content has been manipulated are imperceptible to the human eye. As deepfakes become more common, cybersecurity engineers can leverage the growing number of samples to train more advanced and effective detectors. This arms race cuts both ways, however: more advanced detectors have intrinsic blind spots of their own and are difficult to make fully explainable. Until detectors can reliably outwit deepfakes, and provide context for their suspicions, businesses need to stay vigilant.
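To make that idea concrete, here is a minimal sketch of what such an AI-powered detector can look like in practice: a binary image classifier fine-tuned to separate real face crops from manipulated ones. This is an illustrative example in PyTorch, not any vendor's production system; the dataset layout (data/faces/real and data/faces/fake) is an assumption.

```python
# Minimal sketch of an AI-based deepfake image detector: fine-tune a
# pretrained CNN to classify face crops as real or fake. Illustrative
# only; the "data/faces" directory and its real/fake subfolders are
# hypothetical placeholders for a labeled training set.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder expects data/faces/real/*.jpg and data/faces/fake/*.jpg.
train_set = datasets.ImageFolder("data/faces", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Reuse a pretrained ResNet-18 and replace its head with a two-class
# output (real vs. fake). Manipulation artifacts are often invisible to
# people but show up as statistical patterns a CNN can learn.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs, purely for illustration
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

A real system is far more involved (video frames, audio, ensembles, continual retraining), but the core pattern is the same: a learned classifier trained on labeled examples of genuine and manipulated media.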


How Businesses Can Protect Themselves

Governments and businesses are taking deepfakes increasingly seriously. Protecting against this kind of manipulation requires a combination of technological and personnel-based solutions. First and foremost, a regular red-teaming process must be in place: stress-testing deepfake detection systems against the latest deepfake techniques is the only way to make sure a given system is working properly.
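As a rough illustration of what that stress-testing loop might look like, the sketch below replays a folder of freshly generated deepfake samples through a detector and raises an alert if the catch rate degrades. The detect() function, file layout, and threshold are hypothetical placeholders, not a real product's API.

```python
# Hypothetical red-team check: run known deepfake samples through the
# detector and alert when the catch rate drops below an agreed floor.
from pathlib import Path

DETECTION_FLOOR = 0.95  # minimum acceptable catch rate on known fakes


def detect(path: Path) -> bool:
    """Placeholder for a real detector call; True means flagged as fake."""
    raise NotImplementedError("wire up your detection system here")


def red_team_run(sample_dir: str) -> float:
    samples = sorted(Path(sample_dir).glob("*.mp4"))
    if not samples:
        raise ValueError(f"no samples found in {sample_dir}")
    caught = sum(detect(p) for p in samples)
    rate = caught / len(samples)
    if rate < DETECTION_FLOOR:
        print(f"ALERT: catch rate {rate:.1%} is below {DETECTION_FLOOR:.0%}")
    return rate
```

Running a check like this on a schedule, with samples generated by the newest publicly available deepfake tools, turns red-teaming from a one-off audit into a continuous measurement.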

The second essential aspect of defending against deepfakes is educating employees to be skeptical of videos and video calls containing requests that seem too drastic, too urgent, or otherwise out of the ordinary. A culture of moderate skepticism, alongside solid security protocols, is part of security awareness and preparedness. Often the first line of defense is common sense and person-to-person verification, which can save companies millions and their cybersecurity teams hundreds of hours.

Alongside technological solutions, the best defense against malicious AI is common sense. Businesses that take this two-pronged approach will have a better shot at protecting themselves than businesses that don’t. Considering the speed at which deepfakes are evolving, this is nothing short of critical.

About the Author

Peter Garraghan

CEO and co-founder at Mindgard

Peter Garraghan is CEO and CTO of Mindgard, a professor in computer science at Lancaster University, and a fellow of the UK Engineering and Physical Sciences Research Council (EPSRC). An internationally recognized expert in AI security, Peter has dedicated years of scientific and engineering expertise to creating bleeding-edge technology to understand and overcome growing threats against AI. He has raised over €11.6 million in research funding and published over 60 scientific papers.

