July 11, 2023
At a Glance
- Claude 2 is a large language model capable of analyzing hundreds of pages of text or hours of transcribed audio in a single prompt.
- Claude 2 outperforms its predecessor and promises longer, safer, and more intelligent responses.
Anthropic, the Google-backed AI startup founded by former OpenAI engineers, has unveiled the next generation of its flagship large language model Claude.
Claude 2 boasts improved performance and longer responses, according to a company blog post. For the first time, Anthropic has released a public-facing beta website for trying out the model, though it is currently limited to users in the U.S. and U.K., and access has been patchy due to “capacity constraints.” Anthropic says availability will expand globally in the coming months.
Claude 2 builds on the latest update to Claude, which in May expanded the context window from 9,000 tokens to 100,000 tokens per prompt - about 75,000 words, or six hours of transcribed audio. That means it can perform business or technical analyses of long documents such as financial statements or research papers, summarize long podcasts, and draft longer documents of its own.
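The word-count figure above follows the common rule of thumb of roughly 0.75 English words per token - an approximation that varies by text and tokenizer, not an exact Anthropic figure. A quick sketch of the arithmetic:

```python
# Rough capacity estimate using the common ~0.75 words-per-token heuristic.
WORDS_PER_TOKEN = 0.75  # approximation; actual ratio varies by text and tokenizer

def approx_words(tokens: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return int(tokens * WORDS_PER_TOKEN)

print(approx_words(100_000))  # about 75,000 words
print(approx_words(9_000))    # the previous limit, about 6,750 words
```

By the same heuristic, the earlier 9,000-token limit covered only a few thousand words, which is why the May update was significant for long-document work.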
For example, Claude 2 can "digitize, summarize, and explain" financial statements to analyze a company's strategic risks and opportunities.
Anthropic said other use cases include assessing the pros and cons of legislation and reviewing legal documents and developer documentation. Developers can also "rapidly prototype by dropping an entire codebase into the context and intelligently build on or modify it."
Claude 2 improvements
The latest version of Claude has improved coding and math abilities, as well as enhanced reasoning.
Claude 2 scored 76.5% on the multiple-choice section of the American Bar exam, up from 73% for the previous version. On the GRE - a standardized test to enter master's degree programs in the U.S. - Claude achieved scores that are above the 90th percentile for reading and writing, and similar to the median for quantitative reasoning.
Anthropic likened Claude 2 to a “friendly, enthusiastic colleague or personal assistant who can be instructed in natural language to help you with many tasks.”
The AI startup is offering Claude 2’s API for businesses at the same price as Claude 1.3.
[Image: Example text generation in the style of a fictional video game character]
Safety first, Stanford's diss
Anthropic has pinned its research on imbuing large language models (LLMs) with what it calls ‘constitutional AI,’ in which models are trained to answer adversarial questions using a set of guiding principles so that their outputs are less harmful. Although no model is immune to circumvention, Anthropic said it has employed “a variety of safety techniques” to improve its outputs, including red-teaming - tasking staff with probing the model for exploitable gaps in its safeguards.
Anthropic researchers have also been experimenting with ‘moral self-correction’ - the idea that language models trained with reinforcement learning from human feedback (RLHF) can police and correct themselves to avoid generating harmful outputs when instructed to do so.
“We've been iterating to improve the underlying safety of Claude 2 so that it is more harmless and harder to prompt to produce offensive or dangerous output,” Anthropic said.
However, a recent study by Stanford researchers ranked the original Claude as one of the AI models least compliant with the EU AI Act.
Anthropic on the rise
Anthropic’s star continues to rise in the AI space. Having only unveiled Claude earlier this year, the startup has found itself being mentioned in the same breath as OpenAI and Google.
In the past four months alone, Anthropic has raised $450 million, securing backing from Google, Salesforce and Zoom.