
Facebook launches Dynabench to test AI in realistic conditions

by Chuck Martin
Looking to solve the problem of users messing with its algorithms

Facebook has released a platform for benchmarking the performance of AI models in realistic conditions.

Called Dynabench, it relies on both humans and models to create new data sets aimed at developing better and more flexible AI systems.

Dynabench uses a procedure Facebook calls “dynamic adversarial data collection” to evaluate AI models and measure how easily AI systems are fooled by humans.

Fool me once

“Dynabench is in essence a scientific experiment to see whether the AI research community can better measure our systems’ capabilities and make faster progress,” researchers Douwe Kiela and Adina Williams said in a blog post.

“Researchers in NLP will readily concede that while we have made good progress, we are far from having machines that can truly understand natural language.

“While models quickly achieve human-level performance on specific NLP benchmarks, we still are far from AI that can understand language at a human level. Static benchmarks have other challenges as well.”

An example cited by the researchers involves a model arriving at an incorrect conclusion. To confuse an AI-based system, a human annotator might write: “The tacos are to die for! It stinks I won’t be able to go back there anytime soon,” leading the model to incorrectly label this positive review as negative.
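As a rough illustration of the kind of probing the researchers describe, the snippet below feeds a similarly tricky sentence to an off-the-shelf sentiment classifier. It assumes the Hugging Face transformers library, which is not mentioned in the post; the default model it loads, and the exact prediction, will vary.

```python
# Minimal sketch (not part of Dynabench): probing a stock sentiment
# classifier with an adversarial, human-written example.
# Assumes the Hugging Face `transformers` package is installed.
from transformers import pipeline

# The default sentiment-analysis model is whatever the library ships with;
# which model that is depends on the transformers version.
classifier = pipeline("sentiment-analysis")

example = ("The tacos are to die for! "
           "It stinks I won't be able to go back there anytime soon.")

# Words like "die" and "stinks" can pull a model toward a NEGATIVE label
# even though a human reads the review as positive.
print(classifier(example))
# e.g. [{'label': 'NEGATIVE', 'score': ...}] -- output depends on the model
```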

The dynamic benchmarking occurs over multiple rounds: Dynabench collects the examples that fooled the models in one round, and a new round then begins with improved models in the loop.

“This cyclical process can be frequently and easily repeated, so that if biases appear over time, Dynabench can be used to identify them and create new examples that test whether the model has overcome them,” researchers explained.
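The cyclical process can be pictured as a simple loop. The sketch below is purely illustrative and does not reflect Dynabench’s actual implementation or API; the collect_fooling_examples and train_model callables are hypothetical placeholders for the human annotation step and the retraining step.

```python
# Hypothetical sketch of dynamic adversarial data collection.
# `collect_fooling_examples` and `train_model` are placeholder callables
# supplied by the caller, not Dynabench functions.
def dynamic_benchmark(model, data, collect_fooling_examples, train_model, rounds=3):
    data = list(data)
    for round_id in range(1, rounds + 1):
        # Human annotators write examples the current model gets wrong.
        fooled_by = collect_fooling_examples(model)
        # Those examples join the benchmark / training pool...
        data.extend(fooled_by)
        # ...and a stronger model trained on the enlarged pool enters the
        # loop for the next round, so later examples must be harder to craft.
        model = train_model(data)
        print(f"Round {round_id}: collected {len(fooled_by)} fooling examples")
    return model, data
```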

Dynabench will periodically release updated datasets to the community.

Facebook has partnered with researchers from institutions including UNC-Chapel Hill, Stanford, and UCL to generate new, challenging examples that fool traditional models.
