Deepfake, deep trouble: Zelensky's 'surrender' shows AI's dark side

To counter a deepfake, react fast and be prepared.

Ben Wodecki, Jr. Editor

April 20, 2022

In the middle of March, just weeks after Russian troops invaded his country, Ukrainian President Volodymyr Zelensky gave the order to surrender.

Around the same time, a video of Russian President Vladimir Putin also surfaced on Twitter, purportedly showing him telling his soldiers to drop their weapons and go home.

Or that’s what the videos circulating online suggest. In fact, both videos were deepfakes designed to sow further chaos in an already confusing situation.

Manipulated media like this is nothing new – scammers have been using deepfakes for several years, and the technology behind them is producing increasingly convincing content, with potentially devastating repercussions.

What is a deepfake?

A deepfake uses deep learning to generate synthetic media. A system is trained to replace or generate faces and speech, and to emulate emotions. Most commonly, the result is a video in which a person’s face or voice has been convincingly swapped, deceiving the viewer.
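The face-swap variant is commonly built around one encoder shared between two identities and a separate decoder per identity: feed a face through the shared encoder, then decode it with the *other* person's decoder. The toy sketch below illustrates that shared-encoder, swapped-decoder idea in NumPy; the "faces" are random stand-in vectors and the decoders are fitted with least squares rather than trained neural networks, so this is a conceptual illustration, not a real deepfake pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for aligned face crops: flattened 8x8 patches, two identities.
DIM, LATENT, N = 64, 16, 200
faces_a = 0.5 + 0.1 * rng.standard_normal((N, DIM))
faces_b = 0.3 + 0.1 * rng.standard_normal((N, DIM))

# One shared encoder (here just a fixed random projection) maps any face
# into a common latent space capturing pose and expression.
W_enc = rng.standard_normal((DIM, LATENT)) / np.sqrt(DIM)

def fit_decoder(faces):
    # Least-squares fit of a linear decoder: latent -> that identity's pixels.
    z = faces @ W_enc
    W_dec, *_ = np.linalg.lstsq(z, faces, rcond=None)
    return W_dec

W_dec_a = fit_decoder(faces_a)   # decoder that renders identity A
W_dec_b = fit_decoder(faces_b)   # decoder that renders identity B

# The swap: encode an identity-A face, decode with B's decoder, yielding
# A's pose/expression rendered with B's appearance.
fake = (faces_a @ W_enc) @ W_dec_b
print(fake.shape)
```

Real tools replace the linear maps with convolutional autoencoders trained on thousands of frames per identity, but the swap step is the same: encode with the shared encoder, decode with the target identity's decoder.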

Some famous examples include the young Luke Skywalker depicted in Season 2 of The Mandalorian or several viral videos of a convincing Tom Cruise.

Family and friends hoping for a birthday video message on Cameo can benefit from the technology too, with the likes of Hour One developing an AI-powered version of The Boss Baby to provide personalized greetings.

And brands such as the YouTube channel Good Mythical Morning are using a Voice-as-a-Service (VaaS) product from Veritone to create and monetize synthetic voices.

To be sure, deepfakes can be used nefariously as well. One example saw scammers use “deep voice technology” to simulate voices to illicitly obtain $35 million from a UAE-based bank.

There are several ways to create a deepfake. App-based tools such as Reface and Zao can be downloaded for free, letting mobile users swap their own faces with those of celebrities.

There's also DeepFaceLab, an open-source tool hosted on GitHub. Another option is Uberduck.ai, a free-to-use synthetic voice tool, which the AI Business team used to recreate the voice of Sir Patrick Stewart for a podcast episode last summer.

There are also more technical tools from vendors such as Nvidia (tacotron2) and Mozilla (TTS).

The origin of the Zelensky deepfake is unknown. However, the developers behind DeepFaceLab claim that more than 95% of deepfake videos are created with their tool.

Not the first … nor the last

The Zelensky video shows an even darker side of this technology, though despite its intent, it was quickly dismissed as fake news. The Ukrainian president published a rebuttal on Twitter, vowing to continue the fight.

The same day he posted his response, several social media platforms removed the deepfake for violating their policies on misinformation.

Nathaniel Gleicher, Meta’s head of security, suggested the video first appeared on a “reportedly compromised website” before spreading on social platforms.

According to the Atlantic Council think tank, several Ukrainian websites were hacked, with bad actors posting the video to those sites. The metadata for the deepfake posted to Telegram shows it was created on March 16.

Other politically motivated deepfakes include one of a former Myanmar government minister claiming he funnelled gold to ousted leader Aung San Suu Kyi; the country’s ruling military junta used the footage to accuse her of corruption.

How to stop deepfakes? Be prepared

While the Zelensky video didn't fool many, there may soon come a time when the technology becomes advanced enough that it does. Ukraine's military intelligence agency warned days after the initial deepfake that more would likely follow – and sow chaos.

Ukraine was ready for such an event, with its president quickly deploying a well-planned strategy to stop the spread of falsehoods.

Such preparedness and swift responses were the leading suggestions to tackle illicit synthetic media, according to guidelines shared by the Carnegie Endowment for International Peace ahead of the 2020 U.S. elections.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
