Stability AI’s Policy Chief: AI Will Not Destroy Creativity

But machine ‘unlearning’ efforts are underway to make AI models ‘forget’ certain data

Ben Wodecki, Jr. Editor

July 18, 2023

Image: C-Span

At a Glance

  • Ben Brooks, Stability AI's head of public policy, argues that AI will not hurt creativity.
  • He also said machine 'unlearning' efforts are underway to make AI models forget data covered by opt-out requests.
  • Artist Karla Ortiz said opting out is not enough; creators should instead be able to opt in.

Judging by historical precedent, AI won’t destroy the creative market but will instead empower creators, according to Stability AI’s head of public policy.

Speaking before the U.S. Senate Subcommittee on Intellectual Property, Ben Brooks said that generative AI tools like Stability’s own Stable Diffusion would expand creative opportunities, just as past technological advancements have.

“Smartphones didn’t destroy photography, and word processors didn’t diminish literature despite radically transforming the economics of creation,” he said in his testimony. “Instead, they gave rise to new demand for services, new markets for content, and new creators. We expect the same will be true of AI tools.”

AI image generation models like Stable Diffusion have caused unease in some creative spaces, both over how quickly they can produce content and over concerns that the data used to build them may infringe copyright.

However, Sen. Chris Coons said that while past tech advancements did not destroy their creative markets, "they impacted them.” One harm is that AI-generated content competes with the very human artists whose work the models were trained on.

Last year, many creative platforms banned uploads of AI-generated images, and a group of online creators filed a lawsuit against Stability and other image generation model makers, claiming their content was illicitly used to train models like Stable Diffusion.

Brooks said efforts are underway on remedies such as machine ‘unlearning’ – making a model forget certain data it was trained on without degrading its results. That means that even after training a model, Stability can incorporate opt-out requests from artists and then retrain it.

Stability has so far received opt-out requests covering 160 million images, Brooks said.

But Karla Ortiz, who has illustrated several movie posters for Marvel, said opting out is not good enough: artists should instead be able to opt in to training datasets. She also urged that AI models be trained only on public domain data.

Brooks said Stability would “welcome an ongoing dialogue with the creative community about the fair deployment of these technologies.”

Open models ‘foster competition’

Stable Diffusion, unlike many other image generation models, is openly released – meaning anyone can access it to generate images.

In his Senate testimony, Brooks said that open models can lower barriers to entry for creators, allowing them to deploy new AI tools or even start their own ventures.

“(Creators) can participate in this new industrial revolution as builders – not just consumers – of AI technology, and they can do so without relying on a handful of firms for critical infrastructure,” Brooks told the Senate subcommittee.

He said Stability builds models to “support and augment our users, not replace them.”

“AI can help to accelerate the creative process. AI tools can help existing creators boost their productivity, experiment with new concepts, and perform complex tasks as part of a wider workflow.”

However, when pressed by lawmakers, Brooks admitted that Stability does not pay for the datasets on the internet “subject to aggregated data collection” that the company uses in the “initial” training of its models, nor does it compensate artists.

Added Ortiz: “I have never been asked, I have never been credited, I have never been compensated one penny – and that’s for the entirety of my work, both personal and professional.”

AI policy must hinge on how models are trained, deployed

Brooks told senators that any prospective legislation on AI should account for how models are trained and how users deploy them.

He contended that since training data is not stored in the models themselves, models like Stable Diffusion are creative tools, not “independent agents.”

“The user provides creative direction by supplying text prompts or reference examples and adjusting other settings. The user ultimately determines how the generated content is shared, displayed, or represented to others downstream,” the policy chief argued.

Brooks added that U.S. leadership in AI is due in part to the country's “robust, adaptable” fair-use doctrine, which “balances creative rights with open innovation.”

He noted that other jurisdictions, including Singapore, Japan and the EU, have begun to look at introducing safe harbors for AI training that “achieve similar effects to fair use.”

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
