California Governor Signs 9 Bills Regulating AI-Generated Content

Legislation seeks to address risks stemming from deepfakes

John Yellig

September 26, 2024


California Governor Gavin Newsom recently signed nine bills that seek to address the risks of AI-generated content, particularly deepfakes. Another 29 pieces of AI-related legislation await his signature or veto before the end of the state’s legislative session on Sept. 30.

The most controversial of these is SB 1047, which requires AI developers to implement safeguards to reduce the risk that their technology causes or enables a disaster, such as a mass casualty event or a cyberattack. Thus far, Newsom has been coy about his plans for the bill.

Among the recent legislation he has signed are two bills that aim to protect the digital likenesses of actors and performers — both living and dead — from unauthorized use in AI and other digital technologies:

  • AB 2602 requires a contract for any use of an AI-generated deepfake of a performer’s voice or likeness and requires that the performer be professionally represented in negotiating that contract.

  • AB 1836 prohibits commercial use of deepfakes of dead performers in films, TV shows, video games, audiobooks, sound recordings and other media without the consent of the performers’ estates.

“We continue to wade through uncharted territory when it comes to how AI and digital media is transforming the entertainment industry, but our North Star has always been to protect workers,” Newsom said. “This legislation ensures the industry can continue thriving while strengthening protections for workers and how their likeness can or cannot be used.”


Three bills aim to combat the misuse of AI-generated content, including sexually explicit deepfakes:

  • SB 926 makes it a crime to create and distribute sexually explicit images of a real person that appear authentic when intended to cause emotional distress.

  • SB 981 requires social media platforms to create a mechanism for people to report sexually explicit deepfakes of themselves, and once reported, the platform must temporarily block the material while it investigates and permanently remove it if confirmed.

  • SB 942 requires generative AI systems to place invisible watermarks on the content they produce and to provide free tools to detect them so that AI-generated content can be easily identified.

“Nobody should be threatened by someone on the internet who could deepfake them, especially in sexually explicit ways,” Newsom said. “We’re in an era where digital tools like AI have immense capabilities, but they can also be abused against other people. We’re stepping up to protect Californians.”


Four bills aim to combat the use of deepfakes and other deceptive digitally generated or altered content in election campaigns:

  • AB 2655 requires large online platforms to remove or label deceptive or digitally altered political content during specified periods around elections and requires them to provide ways to report it.

  • AB 2839 expands the timeframe in which political entities are prohibited from knowingly distributing an advertisement or other election material with deceptive AI-generated content.

  • AB 2355 requires political ads using AI-generated content to disclose that the material has been altered.

  • AB 2905 requires robocalls to notify recipients if the voice is artificially generated.

“Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation – especially in today’s fraught political climate,” Newsom said. “These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI.”

About the Author

John Yellig

John Yellig has been a journalist for more than 20 years, writing and editing for a range of publications both in print and online. His primary coverage areas over the years have included criminal justice, politics, government, finance, real estate and technology.
