Google DeepMind Launches Invisible Watermarks for AI Images
SynthID tags images with watermarks and detects them, providing confidence levels on whether they're AI-generated
August 30, 2023
At a Glance
- Google DeepMind’s new SynthID tool can tag and detect AI-generated images based on invisible watermarks.
- Google is looking to expand SynthID to other modalities, including audio, video and text, in the future.
Google DeepMind has launched a tool to identify AI-generated images called SynthID.
Launched in beta at Google Cloud Next 2023, SynthID technology embeds a digital watermark directly into the pixels of an image. While imperceptible to the human eye, the watermark can signal that an image is AI-generated.
The SynthID system can also scan an image for a digital watermark and assess the likelihood that the image was created by Imagen, Google's text-to-image model. The tool provides confidence levels for interpreting the results: if a digital watermark is detected, part of the image was likely generated by Imagen.
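To make the embed-and-detect idea concrete, here is a deliberately simplified sketch. SynthID's actual watermark is produced by a pair of trained deep-learning models and is designed to survive edits; the least-significant-bit scheme, the `KEY` pattern and the region size below are hypothetical stand-ins used only to illustrate hiding a signal in pixel values and scoring a detection.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
# Hypothetical secret bit pattern; SynthID's real signal is model-generated.
KEY = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)

def embed(image: np.ndarray) -> np.ndarray:
    """Hide the key in the least significant bits of a 64x64 pixel region."""
    out = image.copy()
    out[:64, :64] = (out[:64, :64] & 0xFE) | KEY
    return out

def detect(image: np.ndarray) -> float:
    """Confidence score: fraction of least significant bits matching the key."""
    return float(((image[:64, :64] & 1) == KEY).mean())

image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # toy grayscale
print(f"watermarked image: {detect(embed(image)):.2f}")  # ~1.00, likely AI
print(f"unmarked image:    {detect(image):.2f}")         # ~0.50, likely not
```

Unlike this toy scheme, which a single crop or re-encode would destroy, SynthID's learned watermark is spread across the image so it can still be read after common edits.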
Google DeepMind suggests that a tool like SynthID could be more effective at identifying AI-generated images than metadata-based approaches.
"Metadata can be manually removed or even lost when files are edited,” the DeepMind team said in a blog post. “Since SynthID’s watermark is embedded in the pixels of an image, it’s compatible with other image identification approaches that are based on metadata and remains detectable even when metadata is lost.”
SynthID isn’t foolproof – the team behind it warns the watermark could fall foul of “extreme image manipulations.”
Despite the warning, Google DeepMind appears confident SynthID can support responsible AI image generation, adding that the underlying approach could be extended to other media, including audio, video and text.
For now, only a small number of Google Cloud Vertex AI customers can try out SynthID with Imagen. Google DeepMind and Google Cloud are looking to gather feedback from these customers before rolling out SynthID more widely.
Google DeepMind said that being able to identify AI-generated content is “critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation.”
New tools emerge to detect deepfakes amid AI image synthesis surge
Defending against image manipulation used to create deepfakes is a growing research area for AI. As the popularity of AI image generation tools like Stable Diffusion and Midjourney rises, so does the potential for misuse.
Such tools have been shown to accept prompts for fake terror attacks and voter conspiracies, according to a study in late July.
While some AI-generated images are easy to spot, researchers want to make identification easier. A team from MIT created PhotoGuard, which adds an invisible ‘mask’ of perturbations to an image so that AI image editors produce distorted results when they try to modify it.
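At its core, PhotoGuard's published "encoder attack" searches for an imperceptible perturbation that pushes an image's latent representation toward a useless target. The sketch below is a conceptual outline rather than the paper's implementation: `encoder` is a hypothetical stand-in for a real diffusion model's image encoder, and `epsilon`, `steps` and `lr` are illustrative values.

```python
import torch

def photoguard_mask(image: torch.Tensor, encoder, epsilon=8 / 255,
                    steps=40, lr=1 / 255) -> torch.Tensor:
    """Projected gradient descent for an invisible, editing-disrupting mask."""
    target = torch.zeros_like(encoder(image))  # drive latents toward a blank
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(encoder(image + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()    # signed gradient step
            delta.clamp_(-epsilon, epsilon)    # keep the mask imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

An editor that encodes the masked image then reconstructs from corrupted latents, producing the distorted output the MIT team describes.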
OpenAI, creator of the AI image generator DALL-E, has pledged to continue researching ways to determine whether a piece of audio or visual content is AI-generated, though that pledge came after it shut down its AI text detection tool over poor performance.