Anthropic Will Not Train Its AI on Clients' Data

Anthropic updates its commercial terms of service with a no-training pledge, and also makes clear that clients own the generated output

Ben Wodecki, Jr. Editor

January 4, 2024


At a Glance

  • Anthropic introduces a no-customer-data-training policy for its AI models in its updated commercial terms of service.
  • Anthropic also said clients own any output generated using its models.
  • The AI startup promises to indemnify clients against copyright infringement claims, following OpenAI, Google and Microsoft.

Anthropic is pledging not to train its AI models on content from customers of its paid services, according to updates to the Claude developer's commercial terms of service.

The changes, effective January, state that Anthropic’s commercial customers also own all outputs from using its AI models. The Claude developer “does not anticipate obtaining any rights in Customer Content under these Terms.”

“Customers will now enjoy increased protection and peace of mind as they build with Claude, as well as a more streamlined API that is easier to use,” Anthropic said.

The change comes as companies grow increasingly conscious of data privacy and intellectual property concerns relating to AI. Rival developers OpenAI, Microsoft and Google all introduced policies in the latter half of 2023 pledging to defend customers against copyright infringement claims arising from use of their technologies.

Anthropic has now made a similar pledge in its updated commercial terms of service, saying it will defend customers “from any copyright infringement claim made against them for their authorized use of our services or their outputs.”

As part of its legal protection pledge, Anthropic said it will pay for any approved settlements or judgments that result from infringing outputs generated by its AI.


The terms apply both to Claude API customers and to those using Claude through Bedrock, Amazon’s generative AI development suite. Amazon is investing $4 billion in Anthropic and has become its main cloud provider.

API updates

Anthropic has also updated its API: changes to the Messages API are designed to improve prompt construction and let developers catch prompt errors early in development.
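
For illustration, here is a minimal sketch of how that kind of early error catching might look when calling the Messages API through Anthropic's Python SDK. The model name, the intentionally malformed message list and the exception handling are assumptions made for this example, not details taken from Anthropic's announcement.

```python
# Hypothetical sketch: surfacing a malformed-prompt error from the Messages API
# via Anthropic's Python SDK. Model name and error handling are illustrative
# assumptions, not specifics from Anthropic's announcement.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

try:
    message = client.messages.create(
        model="claude-2.1",  # assumed model identifier for the example
        max_tokens=256,
        # The Messages API expects a conversation that starts with a "user"
        # turn; this list is intentionally malformed to trigger validation.
        messages=[
            {"role": "assistant", "content": "I went first by mistake."},
        ],
    )
    print(message.content)
except anthropic.BadRequestError as err:
    # Structural problems in the prompt come back as request errors,
    # so they can be caught during development rather than in production.
    print(f"Prompt rejected: {err}")
```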

The prompt-focused change is one of many updates Anthropic has planned for its API, with the startup set to launch a more robust function calling option “soon.”

The Claude API will also be made more broadly accessible to developers and enterprises building with the company’s AI solutions, Anthropic said.

In November last year, Anthropic released the Claude 2.1 update, which improved the model’s ability to analyze longer documents and reduced hallucinations.


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

