OpenAI in Talks with CNN, Fox to License Content

OpenAI, facing a copyright infringement lawsuit from The New York Times, is negotiating licenses with CNN, Fox and Time

Ben Wodecki

January 15, 2024

3 Min Read
A computer screen displaying the ChatGPT webpage. OpenAI is moving to secure access to copyrighted content from CNN, Fox and Time.

At a Glance

  • OpenAI continues talks with big-name media outlets for access to content to enhance AI training and avoid legal pitfalls.
  • The company also revises its usage policies to remove references to the military, which it claims makes them ‘more readable.’

OpenAI is continuing to secure access to copyrighted content, with deals in the works with CNN, Fox and Time.

Bloomberg reports that licensing discussions are ongoing, just a week after the ChatGPT maker said it would be “impossible” to create AI models without access to copyrighted content.

OpenAI is reportedly seeking a license to access articles from CNN, which is owned by Warner Bros. Discovery, to train ChatGPT. The company will look to feature CNN news snippets in its products, similar to its deal with Axel Springer for brands like Politico and Business Insider.

The company is holding similar discussions with Fox, and is also seeking video and image content from both publishers. The likes of News Corp, Gannett and IAC have held conversations with OpenAI about potential content licensing, according to The New York Times. Guardian News & Media, the parent company of The Guardian, said it too has had discussions with OpenAI, which “may now transition into commercial discussions about the use of our journalism to build and power their products.”

According to Bloomberg, OpenAI is also in discussions with the News/Media Alliance, the trade group for media outlets, to “explore opportunities” and discuss concerns.

OpenAI’s rush to strike deals with publishers comes after The New York Times sued it for copyright infringement. The newspaper alleges that ChatGPT was trained on its copyrighted news and then regurgitates “near-verbatim” content, among other accusations. OpenAI disputes the infringement claims, arguing that its use of the content was fair use and that the news outlet specifically crafted prompts designed to make the model regurgitate its news stories nearly word-for-word.

Deals with publishers would allow OpenAI to use their content to train its AI models without fear of being sued. As part of its deal with Axel Springer, content from the publisher’s brands will feature in ChatGPT responses and include attribution and links to the full articles. OpenAI has a similar deal with the Associated Press.

OpenAI removes military references

In other OpenAI news, the Microsoft-backed company has changed its usage policies covering its models to remove references to the military.

Previously, the company outlined what it called “disallowed usages” of its models. These included specific applications such as malware generation, military and warfare uses, multi-level marketing and plagiarism, among others.

Now, however, OpenAI has introduced four broad rules, dubbed ‘universal policies,’ that apply to its models as well as any OpenAI service, including ChatGPT. They are:

  • Comply with applicable laws

  • Don’t use our service to harm yourself or others

  • Don’t repurpose or distribute output from our services to harm others

  • Respect our safeguards

Terms like ‘military’ and ‘warfare’ do not explicitly appear in any of the new universal policies. The closest reference, under ‘don’t harm yourself or others,’ states that users cannot use OpenAI services to “develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system.”

The update, dated Jan. 10, states that the change was made to render the policies “more readable” and to add service-specific guidance.

An OpenAI spokesperson told AI Business that "our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under 'military' in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions."

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
