The Insidious Way Artists Can Thwart AI Training on Their Art

University of Chicago researchers introduced 'Nightshade,' a data poisoning method that quietly disrupts the training of AI models.

Sascha Brodsky, Contributor

November 7, 2023


Artists have a new tool to fight back against AI systems that scrape their work for training.

A group of researchers from the University of Chicago has published a paper introducing "Nightshade," a data poisoning method designed to disrupt the training of AI models. Nightshade introduces tiny, targeted alterations to the pixels of a digital artwork.

It is part of a growing battle between human creators and AI models that use their work.

“If AI destroys the financial incentive to further develop human art, many careers will no longer be viable,” intellectual property attorney Sheldon Brown said in an interview. “That would also be bad for the AI developers because there would be no new material to feed into the model. Providing an incentive to continue to develop human-made art is important. AI models are trained on human-made art. The AI model itself will stagnate from lack of new matter if humans stop making their own art.”

Dogs that look like cats

Text-to-image diffusion models have surged in popularity over the past year, spreading quickly into fields including advertising, fashion, and web development. However, many artists have complained that these generative AI models were trained on their work without credit or compensation. Some have even sued image generator developers such as Stability AI and Midjourney.


Nightshade might be one way to fight back. The technique exploits an inherent vulnerability in how these models are built: they are trained on vast amounts of scraped, largely unvetted data, which leaves them open to tampering. Nightshade makes subtle alterations to the pixels of a digital image, changes that remain undetectable to the human eye, so that the poisoned image no longer matches the accompanying text or caption that AI systems rely on to learn what an image depicts.
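In broad strokes, attacks in this family optimize a small, bounded pixel perturbation so an image's learned features drift toward an unrelated concept while its caption stays the same. The minimal Python sketch below illustrates that idea; the ResNet-50 stand-in for the model's image encoder, the perturbation budget, and the poison helper are all illustrative assumptions, not the researchers' published algorithm (which targets diffusion models directly).

    # Illustrative feature-space poisoning sketch -- NOT the Nightshade
    # algorithm itself. Assumptions: a ResNet-50 backbone stands in for the
    # image encoder; images are 1x3x224x224 tensors scaled to [0, 1].
    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    encoder = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    encoder.fc = torch.nn.Identity()          # keep penultimate features
    encoder.eval()
    for p in encoder.parameters():            # only the perturbation trains
        p.requires_grad_(False)

    def poison(source, target, eps=8 / 255, steps=100, lr=0.01):
        """Nudge `source` toward `target`'s features, keeping every pixel
        within +/- eps of the original so the change stays imperceptible."""
        with torch.no_grad():
            target_feat = encoder(target)
        delta = torch.zeros_like(source, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = F.mse_loss(encoder(source + delta), target_feat)
            loss.backward()
            opt.step()
            with torch.no_grad():             # enforce the pixel budget
                delta.clamp_(-eps, eps)
                delta.copy_((source + delta).clamp(0, 1) - source)
        return (source + delta).detach()

    dog = torch.rand(1, 3, 224, 224)          # placeholders for real images
    cat = torch.rand(1, 3, 224, 224)
    poisoned_dog = poison(dog, cat)           # publish with the "dog" caption

Paired with its original caption, such an image teaches any model trained on it to associate "dog" text with cat-like visual features.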

Introducing these poisoned images into an AI training dataset can lead the model to misidentify objects, confusing images of hats with cakes or misinterpreting handbags as toasters. The attack can also bleed beyond the targeted pictures into tangentially related concepts: if an AI model ingests poisoned images associated with "fantasy art," it might subsequently struggle to correctly render related concepts like "dragons" or "castles from The Lord of the Rings."

The researchers tested Nightshade extensively against Stable Diffusion's latest models and against an AI model they trained entirely from scratch. After they introduced 50 poisoned images of dogs, prompts for dogs began yielding peculiar results, including creatures with too many limbs and exaggerated, cartoonish features. At 300 poisoned samples, Stable Diffusion could be manipulated into generating images of dogs that closely resembled cats.
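For a rough sense of scale, here is a toy Python snippet, not the paper's experimental code: the clean-set size and filenames are invented, but the point stands that because the attack is prompt-specific, only image-caption pairs tied to the targeted concept need to be poisoned, a sliver of the overall training data.

    import random

    # Invented sizes for illustration: 300 poisoned "dog" pairs, per the test
    # described above, mixed into an assumed 100,000 clean pairs. Only the
    # poisoned pairs carry the targeted "dog" caption.
    clean = [(f"clean_{i}.png", f"caption {i}") for i in range(100_000)]
    poisoned = [(f"poison_{i}.png", "a photo of a dog") for i in range(300)]

    training_set = clean + poisoned
    random.shuffle(training_set)
    print(f"Poisoned fraction: {len(poisoned) / len(training_set):.4%}")
    # Prints: Poisoned fraction: 0.2991%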


Too good to be true?

Some experts say Nightshade might not live up to its promise. Mikhail Kazdagli, head of AI at Symmetry Systems, said in an interview that similar techniques in the realm of adversarial machine learning have been in development for decades.

“Like many other security defenses, it is susceptible to eventual breaches,” he added. “While it may mark the advent of the first production-ready defense system against GenAI, it will inevitably usher in an era of continual defenses and counter-defenses. The pivotal question is how much added complexity or cost GenAI companies will incur to bypass such defenses.”

The reality is that protecting IP has always been a “game of whack-a-mole,” John Bambenek, principal threat hunter at cybersecurity firm Netenrich, said in an interview. Once an effective technique for stopping piracy is found, he pointed out, someone develops a way around it.

“Pirating movies and media are still a thing long after DMCA (Digital Millennium Copyright Act) was signed into law,” he added. “Theft has been with the human species since the beginning, and it will be with us right up until the moment a meteor wipes us all out.”

Tracking pixels and watermarks have long been successful at identifying unlicensed use of images and artwork, Patrick Harr, CEO of anti-phishing firm SlashNext, said in an interview.

“Companies like Getty Images and others that rely on licensing revenue will develop a technology that will protect the rights of artists without the drastic step of corrupting the training models of AI,” he added.

Brown said the most effective technique for protecting intellectual property from AI is to prevent the IP from becoming available to the AI model in the first place.

“That might mean choosing not to publish your artwork online,” he added. “It's not a great long-term solution for obvious reasons, so further innovation is certainly needed for protecting intellectual property.”

What is needed on the legislative side, Brown said, is something similar to the DMCA, which came about when people were infringing on copyrighted works in the early days of the internet. Those regulations created a new, easier, and more effective avenue for people to enforce their IP rights in the digital age.

“In an ideal world, this would be accompanied by an AI detection tool that could identify when another AI is infringing on someone's IP and could submit a removal request similar to the DMCA takedown requests, all on its own,” he added.


About the Author

Sascha Brodsky

Contributor

Sascha Brodsky is a freelance technology writer based in New York City. His work has been published in The Atlantic, The Guardian, The Los Angeles Times, Reuters, and many other outlets. He graduated from Columbia University's Graduate School of Journalism and its School of International and Public Affairs. 
