The likes of Cristian Canton, head of Facebook’s AI Red Team, and Ashish Jaiman, director of technology operations at Microsoft, were speaking at the GSM Association (GSMA) virtual event on deepfakes and society.
The speakers warned that AI had become exceptionally good at generating content, and that a problem which had grown so quickly it was already commonplace would take time to solve.
"It's important to keep an eye on how fast this will evolve. Worst-case scenarios should be at the back of everyone's minds – it's the thing that keeps me awake at night," Canton said.
How can a consumer spot a potential deepfake? Tim Winchcomb, head of technology strategy at Cambridge Consultants, noted that synthesized faces tend to look overly glossy, and told attendees to watch for unnatural eye and mouth movements.
And while he offered the audience some helpful tips for spotting deepfakes, he also reminded attendees that source code for applications that let people superimpose a different face onto video footage is freely available to download on GitHub.
Programmer Ali Aliev created the popular Avatarify app using open source code from the ‘First Order Motion Model for Image Animation,’ published on the arXiv preprint server last year. First Order Motion, developed by Italian academics and Snap Inc., can create a deepfake from a single image, without any prior training on the target image.
“If you’re not worried about deepfakes, you probably should be,” Winchcomb said.
Ashish Jaiman said that Microsoft and others were working closely on a provenance and authentication solution for video content, which could entail using a signal imprinted into a piece of media: “There’s a high confidence that a platform could flag whether something is wrong if it doesn’t have that signal.”
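Jaiman did not describe how such a signal would work in detail, but the general idea of imprinting a verifiable provenance signal into media can be sketched with a minimal example: a publisher computes a cryptographic tag over the media bytes, and a platform later recomputes it to check authenticity. The `sign_media` and `verify_media` names and the symmetric HMAC scheme below are illustrative assumptions only, not Microsoft's actual system; real provenance standards (for example, the C2PA specification, which Microsoft helped found) use public-key signatures and embedded metadata.

```python
import hmac
import hashlib

def sign_media(media_bytes: bytes, key: bytes) -> str:
    # Produce a provenance "signal": an HMAC-SHA256 tag over the raw
    # media bytes. (Illustrative only; real schemes sign with a
    # private key and embed the result as metadata in the file.)
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str, key: bytes) -> bool:
    # A platform recomputes the tag; a mismatch (or a missing tag)
    # flags the media as unverified or tampered with.
    expected = hmac.new(key, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

video = b"...raw video bytes..."
key = b"publisher-secret-key"  # hypothetical publisher credential
tag = sign_media(video, key)

print(verify_media(video, tag, key))         # authentic copy -> True
print(verify_media(video + b"x", tag, key))  # altered copy  -> False
```

The point of the sketch is the platform-side check Jaiman describes: media arriving without a valid signal can be flagged with high confidence, regardless of how convincing its content looks.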
He emphasized that solving authenticity of images online wouldn’t be down to a single platform. “I hope we can find a solution in less than a decade,” he added.
Rob Meadows, president and CTO at AI Foundation, a company that develops AI-based avatars, suggested that deepfakes could prove a boon for Hollywood, adding that the movie industry and entertainment as a whole "are going to get a whole new set of tools to play with."
Meadows and other speakers referenced the idea of using such techniques to bring back actors who died long ago. Well before the event, actor James Dean – who died in 1955 – was “cast” in a Vietnam War movie, and Peter Cushing was “re-animated” to appear in 2016’s Rogue One: A Star Wars Story.
Winchcomb referenced a less cynical case study – a 55-second spot by the charity Malaria No More that featured soccer star David Beckham making a multilingual appeal. The charity used video synthesis technology to make Beckham speak in eight different languages.
And yet, the potential harm of AI-based imagery cannot be ignored – with panelists highlighting a recent warning from the FBI that malicious actors, including Russia and China, would “almost certainly” undertake cyber and foreign influence operations using deepfake technology in the next 12-18 months.
“We’re entering into the age of synthetic media and the pernicious impact that could have,” Tamang Ventures director Nina Schick said. “While we haven’t yet seen any major political deepfakes, we’re starting to see some people decry all media that they don’t agree with as being fake.”
“Disinformation won’t go away, but it’s impacted by technology,” she added. “These systems don’t need a lot of training data – this is potentially the most potent tool of visual manipulation, widely available at next to no cost.”
Schick warned that deepfakes and synthetic media will “cause a paradigm change in the way we communicate going forward.”