Amazon unveils large language model to improve Alexa

20-billion parameter model uses less training data to lower ML costs

Ben Wodecki, Jr. Editor

August 17, 2022

2 Min Read

AI researchers at Amazon have unveiled a new language model that could improve the company’s Alexa voice assistant.

As outlined in an accompanying paper, the Alexa Teacher Model (AlexaTM 20B) is a sequence-to-sequence model with 20 billion parameters. It supports multiple languages, including Arabic, Hindi, Japanese, Tamil and Spanish.

Unlike OpenAI’s GPT-3, which uses a decoder-only architecture, AlexaTM 20B is an encoder-decoder model, a design the researchers say makes it more effective than rival models at tasks such as text summarization and machine translation.
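To make that distinction concrete, here is a minimal sketch using the Hugging Face transformers library with small public models (GPT-2 and T5) as stand-ins; AlexaTM 20B itself is not assumed to be publicly downloadable, so the model names are illustrative only.

```python
# Minimal sketch: decoder-only vs. encoder-decoder generation.
# Uses small public stand-ins (GPT-2, T5), not AlexaTM 20B itself.
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer

# Decoder-only (GPT-style): prompt and output share one token stream;
# the model simply continues the text it was given.
gpt_tok = AutoTokenizer.from_pretrained("gpt2")
gpt = AutoModelForCausalLM.from_pretrained("gpt2")
ids = gpt_tok("The voice assistant replied", return_tensors="pt").input_ids
print(gpt_tok.decode(gpt.generate(ids, max_new_tokens=20)[0]))

# Encoder-decoder (seq2seq, the AlexaTM 20B approach): an encoder reads
# the whole input first, then a separate decoder generates the output,
# a natural fit for summarization and translation.
t5_tok = AutoTokenizer.from_pretrained("t5-small")
t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
ids = t5_tok("translate English to German: Hello, world!", return_tensors="pt").input_ids
print(t5_tok.decode(t5.generate(ids, max_new_tokens=20)[0], skip_special_tokens=True))
```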

In terms of capabilities, Amazon’s researchers suggest the model outperforms GPT-3 on linguistic tasks. It is also capable of few-shot, or low-shot, learning, in which a model picks up a new task from just a handful of examples rather than a large labeled dataset, cutting the training data, and therefore the cost, needed to build new features. Given examples of a task in one language, AlexaTM 20B can generalize it to the other languages it supports.

Figure 1: Using AlexaTM 20B to generate annotated data for a new intent in different languages
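As a rough illustration of the idea in Figure 1, the sketch below shows what a few-shot prompt might look like: a few annotated examples are placed in the prompt and the model is asked to complete the next one. The prompt format and the use of t5-small are assumptions for illustration, not the setup from the paper.

```python
# Hypothetical few-shot prompt: demonstrate a task with a few annotated
# examples, then let the model complete the next one. Format and model
# are illustrative assumptions, not the paper's actual setup.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small")  # small public stand-in
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

prompt = (
    "Intent: PlayMusic | English: play some jazz | Spanish: pon algo de jazz\n"
    "Intent: SetAlarm | English: wake me at 7 am | Spanish: despiértame a las 7\n"
    "Intent: GetWeather | English: will it rain today | Spanish:"
)
ids = tok(prompt, return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, max_new_tokens=20)[0], skip_special_tokens=True))
```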

“At Alexa AI, we are moving to the new paradigm of generalizable intelligence, in which models can learn new concepts and transfer knowledge from one language or task to another with minimal human input,” wrote Saleh Soltan, a senior applied scientist with Alexa AI. “Such models allow us to efficiently develop new features and improve Alexa on multiple languages at the same time.”

Amazon’s AI team plans to evaluate the model further by benchmarking it against public datasets such as MultiATIS, mTOP and MASSIVE.

The researchers also want to make greater use of dialog and user context, experiment with code-switching and examine varying levels of automatic speech recognition (ASR) noise.

“Overall, our results present a compelling case for seq2seq (sequence-to-sequence) models as a powerful alternative to decoder-only models for large-scale language model training,” according to the paper.

Related stories:

7 language models you need to know

A (relatively) simple guide to language models

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
