Intel Labs: Key to Catching Deepfake Videos Is to Be Human
FakeCatcher detects in real time, yields results in ‘milliseconds’
Intel Labs is launching a deepfake detection system that works in real time and takes a novel approach: spotting what makes living beings unique.
Deepfakes are images, videos or speech of a person that have been altered to resemble someone else using deep learning. Deepfakes are usually deployed for malicious purposes, such as political misinformation or impersonation. Famous examples include a video of Ukrainian President Volodymyr Zelensky appearing to surrender to Russia.
Deepfakes are getting more sophisticated and harder to spot. But Intel believes it has found a better way to do so, at least for videos – in real time and in milliseconds. Other detection systems take hours to analyze deepfakes, Intel said.
Called FakeCatcher, Intel’s system looks for color changes in a person’s veins as a tell-tale sign of life: as the heart pumps blood, the veins exhibit subtle shifts in shading.
“What makes us human? The answer is in our blood,” said Ilke Demir, senior staff research scientist at Intel Labs.
These color changes are measured using a technique called photoplethysmography (PPG) and mapped across the face. A deep learning model trained on PPG signals then classifies the video as genuine or fake. FakeCatcher also uses eye gaze-based detection.
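To make the idea concrete, the snippet below is a minimal, illustrative sketch of how a crude PPG-like signal can be pulled from video: average the green channel over a detected face region in each frame. This is not Intel's FakeCatcher implementation, and the file path and detector parameters are placeholders.

```python
import cv2
import numpy as np

# Haar cascade face detector bundled with OpenCV
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("input_video.mp4")  # placeholder path
ppg_signal = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        continue
    x, y, w, h = faces[0]            # take the first detected face
    roi = frame[y:y + h, x:x + w]    # face region of interest
    # Blood-volume changes show up most strongly in the green channel;
    # the per-frame mean forms a rough PPG-like time series.
    ppg_signal.append(roi[:, :, 1].mean())

cap.release()
ppg = np.asarray(ppg_signal)
print(f"Extracted {ppg.size} samples")
```

A real detector such as FakeCatcher would go further, turning signals like this into spatiotemporal maps and feeding them to a trained classifier, but the sketch shows where the "sign of life" in the pixels comes from.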
Intel said the system, which falls under its Responsible AI program, has a 96% accuracy rate and can run up to 72 concurrent detection streams on its third-gen Xeon Scalable processors.
FakeCatcher runs Intel hardware and software on a server that interfaces with a web-based platform. Intel teams used OpenVINO to run the AI models for face detection, deployed optimized computer vision blocks and tapped OpenCV to process images and videos in real time. The Open Visual Cloud project provided an integrated software stack.
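For readers unfamiliar with that stack, the sketch below shows the kind of pipeline the article describes: OpenCV reads frames and OpenVINO runs a face-detection model on an Intel CPU. The model name, input size and video source are assumptions drawn from Intel's Open Model Zoo, not details of the actual FakeCatcher deployment.

```python
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
# Assumed Open Model Zoo detector (face-detection-retail-0004): expects a
# 1x3x300x300 BGR input, returns rows of [image_id, label, conf, x1, y1, x2, y2].
model = core.read_model("face-detection-retail-0004.xml")
compiled = core.compile_model(model, "CPU")

cap = cv2.VideoCapture("stream_or_file.mp4")  # placeholder source
ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    # Resize to the model's input size and reorder to NCHW
    blob = cv2.resize(frame, (300, 300)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    detections = compiled([blob])[compiled.output(0)]
    for det in detections.reshape(-1, 7):
        _, _, conf, x1, y1, x2, y2 = det
        if conf > 0.5:
            print(f"face at ({int(x1 * w)}, {int(y1 * h)}) - "
                  f"({int(x2 * w)}, {int(y2 * h)}), confidence {conf:.2f}")
cap.release()
```

Detected face regions would then be cropped and passed to the PPG and gaze analysis stages described above.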
While Intel’s work is aimed at malicious deepfakes, Demir acknowledged that there are “responsible” deepfakes – cases where the technology is used in digital realms to mask identities, for example to evade facial recognition.
Intel told AI Business that it has received interest from potential users of FakeCatcher such as social media platforms, global news organizations and nonprofits.
"Social media platforms could leverage the technology to prevent users from uploading harmful deepfake videos. Global news organizations could use the detector to avoid inadvertently amplifying manipulated videos. And nonprofit organizations could employ the platform to democratize detection of deepfakes for everyone," the company said.
The chipmaker will also deploy FakeCatcher in clients’ workflows; those clients may choose to make it available to consumers. FakeCatcher is based on the platform-agnostic Open Visual Cloud, and Intel said it is open to providing support.