Amazon Launches New Generation of LLM Foundation Models

The new LLMs available include a text-only model that Amazon said delivers the lowest latency response at a very low cost

Heidi Vella, Contributing Writer

December 16, 2024

Amazon has launched a new generation of foundation models called Amazon Nova that can process text, image and video as prompts. Applications powered by the models enable users to understand videos, charts and documents, or generate videos and other multimedia content, the company said. 

The new LLMs available include Nova Micro, a text-only model that Amazon said delivers the lowest latency response at a very low cost, and Nova Lite, a low-cost multimodal model that is “lightning fast” for processing image, video and text inputs.

In addition, the multimodal model Nova Pro offers the “best combination of accuracy, speed and cost” for a wide range of tasks. Two further models, Nova Canvas and Nova Reel, can be used for image and video generation, respectively.

The new LLMs are available in Amazon Bedrock, which offers a choice of high-performing foundation models.

On the platform, Nova Micro, Nova Lite and Nova Pro are at least 75% less expensive than the best-performing models in their respective intelligence classes, and are also the fastest, according to Amazon.
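
For developers, calling one of these models from Bedrock follows the standard Bedrock runtime pattern. The snippet below is a minimal sketch using the boto3 Converse API; the region and the “amazon.nova-micro-v1:0” model ID are assumed placeholder values that should be checked against the Bedrock console for a given account.

    # Minimal sketch of calling a Nova model through Amazon Bedrock's Converse API.
    # Assumptions: boto3 credentials are configured, the account has been granted
    # access to the model, and the region/model ID below are illustrative values.
    import boto3

    bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = bedrock_runtime.converse(
        modelId="amazon.nova-micro-v1:0",  # assumed model ID; confirm in the Bedrock console
        messages=[
            {
                "role": "user",
                "content": [{"text": "Summarise this product description in two sentences: ..."}],
            }
        ],
        inferenceConfig={"maxTokens": 512, "temperature": 0.3},
    )

    print(response["output"]["message"]["content"][0]["text"])

Swapping in Nova Lite or Nova Pro is, in principle, a matter of changing the model ID and adding image or video content blocks to the message for multimodal prompts.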

Another model, Nova Premier, which Amazon said is the most capable of its multimodal models for complex reasoning tasks and the best teacher for distilling custom models, is planned for release in early 2025.

Its goal, Amazon said, is to use AI to simplify the lives of shoppers, sellers, advertisers, enterprises and “everyone in between.”

“Inside Amazon, we have about 1,000 generative AI applications in motion, and we’ve had a bird’s-eye view of what application builders are still grappling with,” said Rohit Prasad, senior vice president of Amazon Artificial General Intelligence. 

“Our new Amazon Nova models are intended to help with these challenges for internal and external builders, and provide compelling intelligence and content generation while also delivering meaningful progress on latency, cost-effectiveness, customization, information grounding and agentic capabilities.” 

The new models can be custom fine-tuned with users’ own proprietary data. A model can learn what matters most to the customer from this data (including text, images and videos), after which Amazon Bedrock can train a private fine-tuned version that delivers tailored responses.
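
As a rough sketch of what that fine-tuning workflow might look like programmatically, the snippet below starts a Bedrock model customization job; the job name, role ARN, S3 URIs, base model identifier and hyperparameter values are all placeholder assumptions, not values from the announcement.

    # Hedged sketch: starting a fine-tuning (model customization) job in Bedrock
    # with proprietary training data stored in S3. All names, ARNs, URIs and
    # hyperparameters below are illustrative placeholders.
    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")

    job = bedrock.create_model_customization_job(
        jobName="nova-lite-finetune-demo",                                  # hypothetical
        customModelName="nova-lite-support-assistant",                      # hypothetical
        roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",  # placeholder
        baseModelIdentifier="amazon.nova-lite-v1:0",                        # assumed model ID
        customizationType="FINE_TUNING",
        trainingDataConfig={"s3Uri": "s3://example-bucket/train.jsonl"},    # placeholder
        outputDataConfig={"s3Uri": "s3://example-bucket/output/"},          # placeholder
        hyperParameters={"epochCount": "2", "learningRate": "0.00001"},     # illustrative
    )

    print("Started customization job:", job["jobArn"])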

In addition, the models support “distillation.” This enables the transfer of specific knowledge from a larger, highly capable “teacher model” to a smaller, more efficient model that remains highly accurate but is faster and cheaper to run, the company said. The models can also be used for agentic applications.
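
Distillation is exposed through the same customization workflow. The fragment below is a heavily hedged sketch of what a distillation job request might look like, assuming Bedrock’s model distillation API accepts a teacher-model configuration as shown; the parameter shapes and every identifier here are assumptions to be checked against the current Bedrock documentation.

    # Hedged sketch of a Bedrock model distillation job: a larger "teacher" model
    # generates responses used to tune a smaller, cheaper "student" model.
    # The customizationConfig shape and all identifiers below are assumptions.
    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")

    job = bedrock.create_model_customization_job(
        jobName="nova-distillation-demo",                                   # hypothetical
        customModelName="nova-micro-distilled-assistant",                   # hypothetical
        roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",  # placeholder
        baseModelIdentifier="amazon.nova-micro-v1:0",                       # assumed student model
        customizationType="DISTILLATION",
        customizationConfig={
            "distillationConfig": {
                "teacherModelConfig": {
                    "teacherModelIdentifier": "amazon.nova-pro-v1:0",       # assumed teacher model
                    "maxResponseLengthForInference": 1000,
                }
            }
        },
        trainingDataConfig={"s3Uri": "s3://example-bucket/prompts.jsonl"},  # placeholder
        outputDataConfig={"s3Uri": "s3://example-bucket/distilled/"},       # placeholder
    )

    print("Started distillation job:", job["jobArn"])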

Looking specifically at advertising use cases, the company said brands using the Nova creative generation models, Nova Canvas and Nova Reel, advertise five times more products and generate twice as many images per advertised product on average.

Amazon said it will introduce two additional Nova models in 2025: a speech-to-speech model and a native multimodal-to-multimodal, or “any-to-any”, modality model.

As part of its efforts to ramp up AI, the company also recently announced Trainium2 chips, which are specifically designed to meet AI computing demand.

About the Author

Heidi Vella

Contributing Writer, Freelance

Heidi is a freelance journalist and copywriter with over 12 years of experience covering industry, technology and everything in between.

Her specialisms are climate change, decarbonisation and the energy transition, and she also regularly covers everything from AI and antibiotic resistance to digital transformation.
