Google DeepMind Launches Invisible Watermarks for AI Images

SynthID tags images with watermarks and detects them, providing confidence levels on whether they're AI-generated

Ben Wodecki, Jr. Editor

August 30, 2023

2 Min Read
While some AI-generated images are easy to spot, Google DeepMind researchers want to make it easier. Credit: Google DeepMind

At a Glance

  • Google DeepMind’s new SynthID tool can tag and detect AI-generated images based on invisible watermarks.
  • Google is looking to expand SynthID to AI models for other modalities, such as audio, video and text, in the future.

Google DeepMind has launched a tool to identify AI-generated images called SynthID.

Launched in beta at Google Cloud Next 2023, SynthID embeds a digital watermark directly into the pixels of an image. While imperceptible to the human eye, the watermark can signal that an image is AI-generated.
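DeepMind has not published how SynthID's watermark is constructed beyond saying it is embedded by a trained neural network. Purely as a conceptual illustration of a pixel-level watermark, the hypothetical Python sketch below hides a low-amplitude pseudorandom pattern, derived from a secret key, in an image's pixel values:

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Toy pixel-domain watermark, NOT SynthID's actual (learned) method.

    Adds a +/-1 pseudorandom pattern, derived from a secret key, at an
    amplitude far below the threshold of human perception.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    watermarked = image.astype(np.float64) + strength * pattern
    return np.clip(watermarked, 0, 255).astype(np.uint8)
```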

The SynthID system can also scan an image for a digital watermark and assess the likelihood that the image was created by Imagen, Google's text-to-image model. Rather than a simple yes or no, the tool provides confidence levels for interpreting the results: if a digital watermark is detected, part of the image was likely generated by Imagen.
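Detection in that toy scheme is the mirror image: correlate the pixels against the key's pattern and bucket the resulting score into confidence levels rather than a hard verdict, loosely analogous to how SynthID reports its results. Again, this is an illustrative sketch with made-up thresholds, not the trained detector SynthID actually uses:

```python
def detect_watermark(image: np.ndarray, key: int) -> str:
    """Correlate pixels with the key's pattern and bucket the score."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    residual = image.astype(np.float64) - image.mean()
    score = float(np.mean(residual * pattern))  # ~strength if watermarked, ~0 if not
    if score > 1.0:
        return "watermark detected: image likely generated"
    if score > 0.3:
        return "possible watermark: result inconclusive"
    return "no watermark detected"
```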

Google DeepMind suggests that a pixel-based tool like SynthID could be more effective at identifying AI-generated images than metadata labeling.

"Metadata can be manually removed or even lost when files are edited,” the DeepMind team said in a blog post. “Since SynthID’s watermark is embedded in the pixels of an image, it’s compatible with other image identification approaches that are based on metadata and remains detectable even when metadata is lost.”

SynthID isn’t foolproof: the team behind it warns that it could fail against “extreme image manipulations.”

Despite the warning, Google DeepMind appears confident that SynthID can support responsible AI image generation, adding that the approach could be expanded to other AI models covering audio, video and text.

Related: The Essential List: AI Text-Generation Models and Apps

For now, only a small number of Google Cloud Vertex AI customers can try out SynthID via Imagen. Google DeepMind and Google Cloud plan to gather feedback from those customers before rolling out SynthID more widely.

Google DeepMind said that being able to identify AI-generated content is “critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation.”

New tools emerge to detect deepfakes amid AI image synthesis surge

Defending against image manipulation to create deepfakes is a growing research area for AI. As the popularity of AI image generation tools like Stable Diffusion and Midjourney rises, so does the potential for misuse.

Such tools have been shown to accept prompts to generate images of fake terror attacks and voter conspiracies, according to a study published in late July.

While some AI-generated images are easy to spot, researchers want to make it easier. A team from MIT created PhotoGuard, which adds an invisible ‘mask’ that causes an image to distort when it is put through an AI image editor.
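PhotoGuard's ‘mask’ is an adversarial perturbation. As a rough sketch of the underlying idea, the hypothetical PyTorch snippet below uses projected gradient descent to find a perturbation, capped at an imperceptible amplitude, that steers an encoder's output toward a decoy; a toy convolution stands in for the editing model's real encoder:

```python
import torch

# Stand-in encoder; PhotoGuard targets the actual editing model's encoder.
encoder = torch.nn.Conv2d(3, 8, kernel_size=4, stride=4)

image = torch.rand(1, 3, 64, 64)            # photo to protect (hypothetical)
target = torch.zeros_like(encoder(image))   # decoy latent to steer toward
delta = torch.zeros_like(image, requires_grad=True)
epsilon, step = 8 / 255, 1 / 255            # invisibility budget, step size

for _ in range(50):
    # Push the encoding of the perturbed image toward the decoy target.
    loss = torch.nn.functional.mse_loss(encoder(image + delta), target)
    loss.backward()
    with torch.no_grad():
        delta -= step * delta.grad.sign()   # signed-gradient descent step
        delta.clamp_(-epsilon, epsilon)     # project back into the budget
        delta.grad.zero_()

protected = (image + delta).detach().clamp(0, 1)  # looks the same, encodes differently
```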

OpenAI, which created the AI image generator DALL-E, has pledged to continue researching ways to determine whether a piece of audio or visual content is AI-generated. That pledge came, however, only after it shut down its AI text detection tool due to poor performance.

Related: MIT Develops 'Masks' to Protect Images From Manipulation by AI


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
