
Facebook launches Dynabench to test AI in realistic conditions

by Chuck Martin
Looking to solve the problem of users messing with its algorithms

Facebook has released a platform for benchmarking the performance of AI models in realistic conditions.

Called Dynabench, it relies on both humans and models to create new datasets aimed at developing better and more flexible AI systems.

Dynabench uses a procedure Facebook calls “dynamic adversarial data collection” to evaluate AI models and measure how easily humans can fool them.
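
In rough outline, a single collection step might look like the sketch below. The class and method names here are hypothetical, not Dynabench's actual API; the point is simply that a human-written example is kept for the next dataset only when it fools the current model.

```python
# Hypothetical sketch of one "dynamic adversarial data collection" step;
# none of these names come from Dynabench's own codebase.

from typing import Callable, List, Tuple

class AdversarialCollector:
    def __init__(self, model: Callable[[str], str]):
        self.model = model                        # maps text -> predicted label
        self.collected: List[Tuple[str, str]] = []

    def submit(self, text: str, true_label: str) -> bool:
        """Keep the example for the next round only if it fools the model."""
        fooled = self.model(text) != true_label
        if fooled:
            self.collected.append((text, true_label))
        return fooled
```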

Fool me once

“Dynabench is in essence a scientific experiment to see whether the AI research community can better measure our systems’ capabilities and make faster progress,” researchers Douwe Kiela and Adina Williams said in a blog post.

“Researchers in NLP will readily concede that while we have made good progress, we are far from having machines that can truly understand natural language.

“While models quickly achieve human-level performance on specific NLP benchmarks, we still are far from AI that can understand language at a human level. Static benchmarks have other challenges as well.”

An example cited by the researchers involves models arriving at an incorrect conclusion. To confuse an AI-based system, a human annotator might write: “The tacos are to die for! It stinks I won’t be able to go back there anytime soon,” leading the model to incorrectly label this as a negative review.
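
To see why such phrasing is hard, consider a deliberately naive, keyword-based classifier, a toy stand-in (not any model Facebook tested) for the kind of surface-level cue a fooled system might rely on:

```python
# Toy keyword-based sentiment model (illustrative only): the word
# "stinks" drags an overall-positive review to a negative label.

def naive_sentiment(text: str) -> str:
    negative_cues = {"stinks", "awful", "terrible"}
    words = set(text.lower().replace("!", "").split())
    return "negative" if words & negative_cues else "positive"

review = ("The tacos are to die for! It stinks I won't be able to "
          "go back there anytime soon.")
print(naive_sentiment(review))  # -> "negative", though the review is positive
```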

The dynamic benchmarking occurs over multiple rounds, with Dynabench collecting examples that fooled the models in earlier rounds. Each new round then starts with the improved models in the loop.

“This cyclical process can be frequently and easily repeated, so that if biases appear over time, Dynabench can be used to identify them and create new examples that test whether the model has overcome them,” the researchers explained.
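
Putting the pieces together, the cycle could be sketched as follows; `retrain` and `candidate_examples` are hypothetical stand-ins for work the real platform distributes across hosted models and human annotators:

```python
# Hedged sketch of the multi-round cycle: examples that fool the current
# model are folded into the training data, and an improved model anchors
# the next round.

def run_rounds(model, retrain, candidate_examples, n_rounds=3):
    dataset = []
    for rnd in range(1, n_rounds + 1):
        fooled = [(text, label) for text, label in candidate_examples
                  if model(text) != label]   # examples that fooled the model
        dataset.extend(fooled)
        model = retrain(model, dataset)      # better model enters the loop
        print(f"Round {rnd}: kept {len(fooled)} adversarial examples")
    return model
```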

Dynabench will periodically release updated datasets to the community.

Facebook has partnered with researchers from institutions including UNC-Chapel Hill, Stanford, and UCL to generate new, challenging examples that fool traditional models.
