June 29, 2022

GODEL could be used in a ‘wide range’ of dialog applications.
Microsoft has unveiled a new open-source language model to aid dialog design when developing voice assistants and other conversational agents.
Dubbed GODEL (Grounded Open Dialogue Language Model), the pre-trained model is designed to enable both task-oriented and social conversations.
According to Microsoft’s developers, GODEL gives dialog agents the ability to generate responses based not just on the context of the conversation, but also on external information that was not part of the dataset used to train it.
The newly unveiled model will help “empower researchers and developers to create dialog agents that are unrestricted in the types of queries they can respond to and the sources of information they can draw from,” according to a Microsoft blog post by several researchers behind the model.
For example, if a user were to inquire about a local restaurant, the model would be able to respond even if that venue was not included in its training data, according to the researchers responsible for GODEL.
“Responses would vary depending on whether the grounding information is empty, a snippet of a document, a search result (unstructured text), or information drawn from a database about the restaurant (structured text). However, each response would be appropriate and useful.”
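In practice, that grounding is simply extra text fed to the model alongside the dialog history. Below is a minimal sketch of grounded generation, assuming the publicly released microsoft/GODEL-v1_1-base-seq2seq checkpoint on Hugging Face and the [CONTEXT]/[KNOWLEDGE] prompt format described in its model card; treat both names and the restaurant facts as assumptions rather than a definitive recipe.

```python
# Minimal sketch: grounded response generation with a GODEL checkpoint.
# Assumed: the "microsoft/GODEL-v1_1-base-seq2seq" model on Hugging Face
# and its [CONTEXT]/[KNOWLEDGE] prompt format.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

CHECKPOINT = "microsoft/GODEL-v1_1-base-seq2seq"
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSeq2SeqLM.from_pretrained(CHECKPOINT)

def generate(instruction: str, knowledge: str, dialog: list[str]) -> str:
    """Reply to `dialog`, grounded in `knowledge`, which may be empty, a
    document snippet, a search result, or a database row flattened to text."""
    if knowledge:
        knowledge = "[KNOWLEDGE] " + knowledge
    context = " EOS ".join(dialog)  # dialog turns joined with an EOS marker
    query = f"{instruction} [CONTEXT] {context} {knowledge}"
    input_ids = tokenizer(query, return_tensors="pt").input_ids
    outputs = model.generate(input_ids, max_length=128, min_length=8,
                             top_p=0.9, do_sample=True)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# The same question with and without grounding (hypothetical restaurant facts).
instruction = ("Instruction: given a dialog context and related knowledge, "
               "reply helpfully.")
dialog = ["Can you recommend somewhere to eat near the station?"]
print(generate(instruction, "", dialog))  # ungrounded reply
print(generate(instruction,
               "Luigi's Trattoria serves Italian food; rating 4.6; open "
               "until 11pm; two blocks from Central Station.", dialog))
```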
GODEL would also be able to provide details about an event even when all the data used to train it predates that event.
The code is available on GitHub under an MIT license, which requires users to preserve copyright and license notices.
Figure 1: Wide range of dialog applications
Microsoft’s research suggests GODEL could be applied to a “wide range of dialog applications,” such as question answering and grounded chit-chat. (Grounding refers to the sources from which dialog agents retrieve information.)
The computing giant has published three open-source versions of GODEL: base, large, and extra-large.
Also published is the code needed to retrain all pre-trained models and to fine-tune them on datasets for specific tasks: CoQA, for conversational question answering; the Wizard of Wikipedia and Wizard of the Internet datasets, for information-seeking chats; and MultiWOZ, for task-completion dialogs.
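Microsoft’s repository ships its own training scripts, and their exact interface is not covered here. As an illustration of what task-specific fine-tuning looks like, the following sketch uses Hugging Face’s Seq2SeqTrainer instead of the repo’s tooling, with a single hypothetical training pair in the same grounded prompt format; the checkpoint name and output directory are assumptions.

```python
# Illustrative sketch only: fine-tuning a GODEL checkpoint on (grounded
# context, response) pairs with Hugging Face's Seq2SeqTrainer. This stands
# in for the training scripts in Microsoft's repo, whose interface may differ.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

CHECKPOINT = "microsoft/GODEL-v1_1-base-seq2seq"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSeq2SeqLM.from_pretrained(CHECKPOINT)

# Hypothetical example in the [CONTEXT]/[KNOWLEDGE] format; a real run would
# use a full dataset such as CoQA or MultiWOZ converted to this shape.
pairs = [
    {"source": "Instruction: reply using the knowledge. [CONTEXT] What time "
               "does the museum open? [KNOWLEDGE] The museum opens at 9am "
               "daily except Mondays.",
     "target": "It opens at 9am every day except Monday."},
]

def tokenize(example):
    enc = tokenizer(example["source"], truncation=True, max_length=512)
    enc["labels"] = tokenizer(example["target"], truncation=True,
                              max_length=128)["input_ids"]
    return enc

train_ds = Dataset.from_list(pairs).map(
    tokenize, remove_columns=["source", "target"])

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="godel-finetuned",
                                  per_device_train_batch_size=2,
                                  num_train_epochs=3),
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```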
“We hope GODEL helps numerous academic research teams advance the field of conversational AI with innovative dialog models while eliminating the need for significant GPU resources,” the research team wrote. “We plan to continuously improve GODEL and make more models available to the research community.”