Senate Bill Aims to Combat AI Deepfakes, Protect Content Creators

New legislation would require AI companies to add origin data to generated content and prohibit its removal

Ben Wodecki, Jr. Editor

July 16, 2024

View of the United States Capitol dome with its flag against a blue sky. Credit: Getty Images

A new bipartisan bill has been introduced in the Senate to establish federal transparency guidelines for AI-generated content, including deepfakes. 

The Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act would provide content creators with legal protections against unauthorized use of their work in AI systems.

Companies providing generative tools capable of creating images or creative writing would be required to attach provenance information, or metadata about a piece of content’s origin, to their outputs.
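
For illustration only, here is a minimal sketch of what attaching origin metadata to a generated image could look like, using the Pillow library and a made-up “ai_provenance” field; the bill itself does not prescribe any particular format or tooling:

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Hypothetical example: embed origin information in a generated PNG's text chunk.
    image = Image.new("RGB", (512, 512), "white")  # stand-in for a model's output
    provenance = PngInfo()
    provenance.add_text("ai_provenance", '{"generator": "example-model", "created": "2024-07-16"}')
    image.save("output.png", pnginfo=provenance)

    # The metadata travels with the file and can be read back later.
    print(Image.open("output.png").text["ai_provenance"])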

The lawmakers say that forcing AI providers to include such data would let rightsholders detect whether their work was used without permission to generate new content.

“These measures give content owners [including] journalists, newspapers, artists, songwriters and others the ability to protect their work and set the terms of use for their content, including compensation,” according to an announcement.

The bill would also prohibit anyone, including search engines and social media companies, from tampering with or altering data about a piece of content’s origin. The prospective law would also cover the removal or disabling of such data.

The Federal Trade Commission (FTC) and state attorneys general would be required to enforce the legislation’s conditions.


Content owners and rightsholders would also retain the right to sue users and platforms that use their content without permission to create deepfakes.

The National Institute of Standards and Technology (NIST) would be required to develop guidelines and standards for digital watermarking and synthetic content detection. The agency would also be instructed to develop cybersecurity measures to prevent bad actors from tampering with digital watermarks.

The bill was introduced by Senators Martin Heinrich, co-founder and co-chair of the Senate AI Caucus, Maria Cantwell and Marsha Blackburn.

“The bipartisan COPIED Act I introduced with Senator Blackburn and Senator Heinrich, will provide much-needed transparency around AI-generated content,” said Cantwell. “The COPIED Act will also put creators, including local journalists, artists and musicians, back in control of their content with a provenance and watermark process that I think is very much needed.”

The introduction of the COPIED Act comes as rightsholders increasingly move to protect their content from organizations developing generative AI systems.

The New York Times and several smaller newspapers are leading the charge, filing suit against OpenAI over claims of mass copyright infringement stemming from the alleged scraping of their content for use in training models.


The COPIED Act is designed to give content owners legal protections against unauthorized AI use of their work while increasing the transparency of AI-generated content.

The bill has already obtained support from the actors union SAG-AFTRA, the News/Media Alliance and the National Association of Broadcasters.

“The capacity of AI to produce stunningly accurate digital representations of performers poses a real and present threat to the economic and reputational well-being and self-determination of our members,” said Duncan Crabtree-Ireland, SAG-AFTRA’s national executive director and chief negotiator. “[The COPIED Act] would ensure that the tools necessary to make the use of AI technology transparent and traceable to the point of origin will make it possible for victims of the misuse of the technology to identify malicious parties and go after them.”

The Recording Industry Association of America (RIAA), which recently filed lawsuits on behalf of its members against AI music generation platforms over alleged infringement, is also supporting the bill.

“Leading tech companies refuse to share basic data about the creation and training of their models as they profit from copying and using unlicensed copyrighted material to generate synthetic recordings that unfairly compete with original works,” said Mitch Glazier, RIAA’s chair and CEO.


About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

