Why Deepfakes Pose An Unprecedented Threat To Businesses
May 1, 2019
by Henry Ajder
CAMBRIDGE - There is growing concern about the threat that weaponised misinformation, such as fake news, poses to businesses and their brands. This concern is well founded, as reflected in a recent report by New Knowledge showing that 78% of consumers think misinformation damages brand reputation. Whilst this statistic demonstrates that businesses need to develop strategies to combat existing forms of misinformation, deepfakes pose an even greater threat going forward.
Deepfakes are highly realistic fake videos, audio recordings, or photos, typically generated by deep learning AI. The algorithms behind this AI are fed large amounts of data, such as photos of a person’s face or audio samples of their voice. From this data, the algorithm can generate synthetic audio of that person’s voice saying things they have never said, or videos of that person doing things they have never done, such as the now infamous deepfake of Barack Obama.
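To make the mechanism concrete, the face-swapping technique that gave deepfakes their name rests on a simple autoencoder trick: a single shared encoder learns a representation common to two faces, while each identity gets its own decoder, and the swap is performed by encoding a frame of person A and decoding it with person B’s decoder. Below is a minimal, illustrative PyTorch sketch of that idea; the network sizes, image resolution, and training details are illustrative assumptions, not a reproduction of any specific tool.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder trick behind
# early face-swap deepfakes. Illustrative only: the architecture sizes and
# 64x64 RGB resolution are assumptions, not any specific tool's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Shared encoder: maps a face image to a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face from the latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
optimiser = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

def train_step(faces_a, faces_b):
    """Each identity is reconstructed through its own decoder, forcing
    the shared encoder to learn features common to both faces."""
    loss = F.mse_loss(decoder_a(encoder(faces_a)), faces_a) \
         + F.mse_loss(decoder_b(encoder(faces_b)), faces_b)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

# After training, the "swap": encode a face of A, decode it as B.
# swapped = decoder_b(encoder(faces_a))
```

The striking part is how little is required: given enough photos of two people, this recipe learns the swap automatically, with no manual editing skill involved.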
As deepfakes rapidly improve and become more accessible, the threats they pose to businesses are increasingly severe. Here I want to focus on three specific ways deepfakes threaten businesses: social engineering, market manipulation, and extortion.
Social engineering
Social engineering and fraud are by no means new threats to businesses, with spam, phishing, and malware routinely targeting employees and businesses’ IT infrastructure. Most corporate entities have adapted to deal with these threats, employing robust cybersecurity measures and educating employees.
However, deepfakes will provide an unprecedented means of impersonating individuals, contributing to fraud that will target individuals in traditionally ‘secure’ contexts, such as phone calls and video conferences. This could see the creation of highly realistic synthetic voice audio of a CEO requesting the transfer of certain assets, or the synthetic impersonation of a client over Skype, asking for sensitive details on a project.
These uses of deepfakes may seem far-fetched, but as a Buzzfeed journalist demonstrated last year, even primitive synthetic voice audio was enough to convince his mother that he was speaking to her on the phone. The threat here derives from the existing assumption that this kind of synthetic impersonation is not possible.
Previous examples of direct audio-visual impersonation scams read like something out of a Hollywood film. One recent case involved Israeli conmen stealing €8m from a businessman by impersonating the French foreign minister over Skype, recreating a fake version of his office, and hiring a makeup expert to disguise them as the minister and his chief of staff. Successful impersonation scams of this kind are rare largely because of the difficulty of convincingly impersonating a VIP.
Deepfakes will make this kind of live impersonation fraud worryingly easy, with cybercriminals creating deepfake ‘puppets’ of CEOs and other VIPs from publicly accessible photos and audio. Similarly, biometric security measures, such as the voice and facial recognition used in automated KYC procedures for onboarding bank customers, may be compromised by deepfakes that can almost perfectly replicate these features of an individual.
Market manipulation
In addition to scams and direct impersonation, deepfakes also have significant potential to enhance market manipulation attacks. This could involve the precisely timed and targeted publication of a deepfake, such as a video of US President Donald Trump promising to impose or lift tariffs on steel imports, causing a company’s stock price to plummet or soar.
Another good example of how deepfake market manipulation could play out can be seen in the recent erratic behaviour of PayPal co-founder and Tesla CEO Elon Musk. This includes Musk smoking marijuana on a popular live podcast, which contributed to Tesla stock dropping 6%, and a securities fraud investigation after he joked on Twitter about taking Tesla private at $420 a share.
The public expectation of such volatile behaviour from Musk makes him a prime target for deepfakes depicting him acting in a damaging way, further impacting Tesla’s share price and corporate reputation. One may argue such scenarios are unrealistic and would be subject to market rollbacks once the deepfake has been exposed. However, more primitive forms of fake news have already been documented impacting stock markets, and the time required to confidently prove a video or photo is a deepfake may make such rollbacks impossible.
Extortion
Beyond market manipulation, deepfakes will also enhance and likely increase extortion attempts against influential business leaders. This is an already established phenomenon, most recently seen in a case where Amazon founder Jeff Bezos accused The National Enquirer of threatening to publish naked pictures of him.
As deepfakes proliferate, fake videos or audio of business leaders could be generated quickly and at scale, leveraging existing damaging rumours or fabricating new scenarios. These deepfakes could then be used to extort or blackmail in the same way as the allegedly real photos mentioned above, with the identical threat of significant damage to the individual’s reputation.
What’s more, the speed at which a deepfake would spread across social media and other digital platforms means the damage could be done before it is taken down or exposed as fake. Such attacks have already occurred outside of the corporate sphere, with Indian journalist Rana Ayyub being inserted into deepfake pornography in an attempt to smear her reputation.
In these deepfake extortion scenarios, the authenticity of the defamatory video or photos becomes irrelevant: real and fabricated material have equal potential to cause catastrophic damage to individual and corporate reputation.
Preparation is key
The threats posed by deepfakes are still developing. However, they are not part of a distant future, but rather an imminent reality. Businesses will likely be amongst the first entities targeted with deepfakes, in part due to the significant financial gain weaponising this technology could bring.
It is essential that the corporate world prepares for the inevitable impact of deepfakes, educating employees about this emerging threat and integrating media authentication tools into its data pipelines. Failing to do so may lead to irreparable damage to corporate reputation, profits, and market valuation.
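The article does not prescribe particular authentication tools, so as one illustrative possibility, consider cryptographic provenance: media is signed at the point of capture or publication, and anything arriving through the pipeline without a valid signature is flagged for scrutiny. Below is a minimal sketch using Ed25519 signatures via Python’s cryptography library; the function names and workflow are hypothetical, a sketch of the general idea rather than any specific product’s API.

```python
# One illustrative form of "media authentication": verify a cryptographic
# signature attached to a media file, so content whose provenance cannot
# be proven is flagged before reaching decision-makers. Hypothetical
# sketch; the signing scheme (Ed25519) and workflow are assumptions.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def sign_media(private_key: ed25519.Ed25519PrivateKey, media_bytes: bytes) -> bytes:
    """Producer side: sign the raw media bytes at capture/publication time."""
    return private_key.sign(media_bytes)

def verify_media(public_key: ed25519.Ed25519PublicKey,
                 media_bytes: bytes, signature: bytes) -> bool:
    """Consumer side: accept media only if the signature checks out."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False

# Demo with stand-in bytes for a media file.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video = b"...raw bytes of a video file..."
sig = sign_media(private_key, video)

assert verify_media(public_key, video, sig)             # untampered: passes
assert not verify_media(public_key, video + b"x", sig)  # altered: flagged
```

A signature check only proves where a file came from, not whether its content is genuine footage; in practice, provenance checks of this kind would sit alongside deepfake detection models such as those Deeptrace is developing.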
Henry Ajder is Head of Communications and Research Analysis at Deeptrace, a startup developing deep learning technologies to detect deepfakes.