Facebook launches Dynabench to test AI in realistic conditions
Looking to solve the problem of users messing with its algorithms
Facebook has released a platform for benchmarking the performance of AI models in realistic conditions.
Called Dynabench, it relies on both humans and models to create new datasets aimed at developing better and more flexible AI systems.
Dynabench uses a procedure Facebook calls “dynamic adversarial data collection” to evaluate AI models and measure how easily AI systems are fooled by humans.
Fool me once
“Dynabench is in essence a scientific experiment to see whether the AI research community can better measure our systems’ capabilities and make faster progress,” researchers Douwe Kiela and Adina Williams said in a blog post.
“Researchers in NLP will readily concede that while we have made good progress, we are far from having machines that can truly understand natural language.
“While models quickly achieve human-level performance on specific NLP benchmarks, we still are far from AI that can understand language at a human level. Static benchmarks have other challenges as well.”
An example cited by the researchers involves tripping up a sentiment classifier. To confuse an AI-based system, a human annotator might write: “The tacos are to die for! It stinks I won’t be able to go back there anytime soon,” leading the model to incorrectly label this positive review as negative.
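For illustration, here is a minimal sketch of how one might probe an off-the-shelf sentiment model with this kind of example, using the Hugging Face transformers library. The library and its default model are our choices for illustration, not part of Dynabench, and whether a given model is actually fooled will depend on the model.

```python
# A minimal sketch of probing a sentiment model with an adversarial example.
# Requires the Hugging Face transformers library; any off-the-shelf sentiment
# model could stand in here.
from transformers import pipeline

# Loads a default pretrained sentiment-analysis model.
classifier = pipeline("sentiment-analysis")

# A genuinely positive review phrased with negative-sounding words
# ("stinks"): the kind of example a Dynabench annotator might write.
example = ("The tacos are to die for! It stinks I won't be able "
           "to go back there anytime soon.")

result = classifier(example)[0]
print(result["label"], result["score"])
# A brittle model may output NEGATIVE here, even though the review is positive.
```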
The dynamic benchmarking occurs over multiple rounds: Dynabench collects the examples that fooled the models, uses them to train improved models, and then starts a new round with those better models in the loop.
“This cyclical process can be frequently and easily repeated, so that if biases appear over time, Dynabench can be used to identify them and create new examples that test whether the model has overcome them,” researchers explained.
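To make that cycle concrete, here is a toy, self-contained simulation of it. Everything in the sketch (the model, the annotator, and the training step) is an illustrative stand-in rather than Dynabench's actual API; on the real platform the annotators are humans and the models are real NLP systems.

```python
# Toy simulation of dynamic adversarial data collection, as described above.
# All names here are illustrative placeholders, not Dynabench's actual API.
import random

random.seed(0)

class ToyModel:
    """Stand-in classifier: answers correctly only on examples it was trained on."""
    def __init__(self, known=None):
        self.known = dict(known or {})

    def predict(self, text):
        # Guesses at random on examples it has never seen.
        return self.known.get(text, random.choice(["pos", "neg"]))

def annotator_writes_example(round_id, i):
    """Stand-in for a human writing a labeled example meant to fool the model."""
    label = random.choice(["pos", "neg"])
    return f"round{round_id}-example{i}", label

def dynamic_adversarial_collection(model, rounds=3, attempts_per_round=20):
    for round_id in range(rounds):
        # Collect the examples the current model gets wrong.
        fooled = []
        for i in range(attempts_per_round):
            text, label = annotator_writes_example(round_id, i)
            if model.predict(text) != label:
                fooled.append((text, label))
        print(f"Round {round_id}: model fooled {len(fooled)} times")
        # "Train" an improved model on the fooling examples, then put it
        # in the loop for the next round.
        model = ToyModel({**model.known, **dict(fooled)})
    return model

final_model = dynamic_adversarial_collection(ToyModel())
```

The point of the loop is that each round's fooling examples become training data, so later rounds face a harder target and must find new ways to trip it up.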
Dynabench will periodically release updated datasets to the community.
Facebook has partnered with researchers from institutions including UNC-Chapel Hill, Stanford, and UCL to generate new, challenging examples that fool current models.