AI Business is part of the Informa Tech Division of Informa PLC



Amazon unveils large language model to improve Alexa

20-billion-parameter model uses less training data to lower ML costs

AI researchers from Amazon have unveiled a new language model that could improve the company's Alexa voice assistant.

As outlined in a paper, the Alexa Teacher Model (AlexaTM 20B) is a sequence-to-sequence model with 20 billion parameters. It supports multiple languages, including Arabic, Hindi, Japanese, Tamil and Spanish.

Unlike OpenAI’s GPT-3, which uses a decoder-only approach, AlexaTM 20B uses an encoder-decoder architecture. The researchers say this makes it more effective than rival models at tasks such as text summarization and machine translation.
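The architectural difference comes down to which tokens are allowed to attend to which. The following minimal sketch (illustrative only, not Amazon's implementation) contrasts the attention masks of the two designs:

```python
# Sketch of the attention patterns behind the two architectures.
# Token counts and names are illustrative assumptions.

def causal_mask(n):
    """Each position may attend only to itself and earlier positions."""
    return [[j <= i for j in range(n)] for i in range(n)]

def bidirectional_mask(n):
    """Each position may attend to every position (full context)."""
    return [[True] * n for _ in range(n)]

src_len, tgt_len = 4, 3  # input tokens, output tokens

# Decoder-only (GPT-3 style): input and output share one causal stream,
# so even the input is read left-to-right only.
decoder_only = causal_mask(src_len + tgt_len)

# Encoder-decoder (AlexaTM 20B style): the encoder reads the whole input
# bidirectionally; the decoder is causal over the output but can attend
# to every encoder state through cross-attention.
encoder_self = bidirectional_mask(src_len)
decoder_self = causal_mask(tgt_len)
cross_attention = [[True] * src_len for _ in range(tgt_len)]
```

Because the encoder sees the full input at once, tasks that condition on a complete source text, such as summarization and translation, map naturally onto this design.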

In terms of capabilities, Amazon’s researchers suggest it outperforms GPT-3 on linguistic tasks. The model is also capable of few-shot (or low-shot) learning, in which a model picks up a new task from only a handful of labeled examples, reducing the amount of training data needed and thus ML costs. Depending on the input, AlexaTM 20B can generalize a task to other languages it has seen.
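In practice, few-shot learning is often done by packing a handful of labeled demonstrations into the model's input and letting it complete the pattern. The sketch below is a hypothetical prompt builder for a multilingual intent task; the utterances, intent labels and prompt format are illustrative assumptions, not the format used in the AlexaTM 20B paper:

```python
# Hypothetical few-shot prompt for multilingual intent classification.

def build_few_shot_prompt(examples, query):
    """Concatenate labeled demonstrations followed by an unlabeled query,
    so the model can infer the task from the pattern and fill in the
    final intent label itself."""
    blocks = [f"utterance: {utt}\nintent: {intent}" for utt, intent in examples]
    blocks.append(f"utterance: {query}\nintent:")
    return "\n\n".join(blocks)

examples = [
    ("play some jazz", "PlayMusic"),
    ("wake me at 7 am", "SetAlarm"),
    ("pon música clásica", "PlayMusic"),  # Spanish demonstration
]

# A French query: a multilingual model can carry the task across languages.
prompt = build_few_shot_prompt(examples, "réveille-moi à huit heures")
print(prompt)
```

Mixing demonstration languages, as above, is one way such a model can transfer a task learned in one language to another.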

Figure: Using AlexaTM 20B to generate annotated data for a new intent in different languages

“At Alexa AI, we are moving to the new paradigm of generalizable intelligence, in which models can learn new concepts and transfer knowledge from one language or task to another with minimal human input,” wrote Saleh Soltan, a senior applied scientist with Alexa AI. “Such models allow us to efficiently develop new features and improve Alexa on multiple languages at the same time.”

Amazon’s AI team plans to further evaluate the model by benchmarking it with different public datasets such as MultiATIS, mTOP and MASSIVE.

The researchers also want to make greater use of dialog and user context, experiment with code-switching and examine varying levels of automated speech recognition noise.

“Overall, our results present a compelling case for seq2seq (sequence-to-sequence) models as a powerful alternative to decoder-only models for large-scale language model training,” according to the paper.

