Nightshade Tool Safeguards Images Against Unauthorized AI Generation

University of Chicago researchers have developed a tool that “poisons” training images, ruining the output of AI image generators that use them without consent

Ben Wodecki, Jr. Editor

April 8, 2024


University of Chicago researchers have developed a tool that “poisons” AI image generators so models cannot be trained on pictures without consent.

Nightshade is a tool that enables artists and copyright owners to create "poisoned" versions of their images. These versions appear identical to the originals, but they carry hidden alterations that are invisible to the human eye.

When an AI image generator such as Stable Diffusion ingests poisoned images as training data, the hidden alterations distort the model's generated output, rendering the images useless as training data.

Users can crop, edit and compress the poisoned images, and the effect remains. Nightshade even survives screenshots of poisoned images or photos of them displayed on a monitor; any resulting copy still carries the poison and will distort the output of a model trained on it.
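In broad strokes, the approach described above amounts to optimizing a small, bounded pixel-level change so that an image's machine-readable features drift toward an unrelated concept while its appearance barely changes. The sketch below illustrates only that general idea; it is not the Nightshade implementation, and the encoder, images and parameter values are placeholders assumed for illustration.

# Conceptual sketch only -- NOT the Nightshade implementation.
# It shows the general idea: optimize a tiny, bounded perturbation so an
# image's feature representation drifts toward an unrelated "decoy" concept
# while the picture itself looks unchanged. The encoder here is a stand-in;
# the real tool targets the feature space of text-to-image models.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18


def poison_image(original, decoy, encoder, eps=8 / 255, steps=200, lr=1e-2):
    """Nudge `original` (within an L-infinity budget `eps`) so that its
    embedding moves toward the embedding of `decoy`, an image of an
    unrelated concept."""
    delta = torch.zeros_like(original, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        target_feat = encoder(decoy)  # embedding of the decoy concept
    for _ in range(steps):
        poisoned = (original + delta).clamp(0, 1)
        # pull the poisoned image's features toward the decoy's features
        loss = F.mse_loss(encoder(poisoned), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():  # keep every pixel change imperceptible
            delta.clamp_(-eps, eps)
    return (original + delta.detach()).clamp(0, 1)


if __name__ == "__main__":
    encoder = resnet18(weights=None).eval()  # placeholder feature extractor
    for p in encoder.parameters():
        p.requires_grad_(False)
    original = torch.rand(1, 3, 224, 224)    # stand-in for the artist's image
    decoy = torch.rand(1, 3, 224, 224)       # stand-in for an unrelated concept
    poisoned = poison_image(original, decoy, encoder)
    print("max pixel change:", (poisoned - original).abs().max().item())

In this sketch, the L-infinity bound (eps) is what keeps the change invisible to people, while the feature-space loss is what makes the image misleading to a model.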

Nightshade is designed to be an offensive tool to make it difficult for image generation models to use copyrighted images without permission.

AI image generation firms like Stability AI offer opt-out mechanisms that allow artists to prevent AI models from using their images. However, the team behind Nightshade argues that these mechanisms do not work.

“For content owners and creators, few tools can prevent their content from being fed into a generative AI model against their will,” according to a Nightshade blog post. “Opt-out lists have been disregarded by model trainers in the past and can be easily ignored with zero consequences. They are unverifiable and unenforceable and those who violate opt-out lists and do-not-scrape directives can not be identified with high confidence.”

Related: MIT Develops 'Masks' to Protect Images From Manipulation by AI

The team that built Nightshade said it is designed not to disrupt models but rather to increase costs for developers creating image generators using unlicensed content.

Nightshade works similarly to Glaze, another University of Chicago research project aimed at stopping AI model developers from training on images without a license.

Nightshade differs from Glaze in that it is an offensive tool meant to deter training on images scraped without permission, while Glaze is a defensive tool designed to make it harder for AI models to reproduce an artist's unique style.

Nightshade is available for anyone to use. The tool can be downloaded from the project’s website and runs on Windows and Mac. Running Nightshade does not require any additional GPU drivers.

Nightshade joins a growing cohort of tools that help artists protect their work, such as MIT’s PhotoGuard, which creates “masks” that cause AI edits of protected images to come out distorted.

Related: Google DeepMind Launches Invisible Watermarks for AI Images

Additional tools such as SynthID from Google DeepMind and Stable Signature from Meta are also available to identify images created by AI. However, these tools are specific to particular image generation models, functioning solely for Imagen and Emu, respectively.


About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

