AI Verify gives businesses the tools to check their models for bias

Ben Wodecki, Jr. Editor

January 16, 2023

1 Min Read

The Singaporean government has launched AI Verify, an AI governance testing framework and toolkit to ensure systems meet their declared performance benchmarks.

AI Verify is designed to encourage transparency in AI systems and includes testing frameworks and a software toolkit to conduct technical tests.

It covers major areas of concern for AI systems, including how models reach decisions, management and oversight, and ensuring that the use of AI does not unintentionally discriminate.

Available as a minimum viable product (MVP), AI Verify lets brands validate what their AI systems can do and what actions have been taken to lessen the risks those systems pose.

AI Verify was launched in a pilot phase last May by the country’s Infocomm Media Development Authority (IMDA), a statutory board that sits under Singapore’s Ministry of Communications and Information.

According to the authorities behind it, AI Verify can be easily deployed in either developer or user environments.

It is important to note, however, that AI Verify doesn’t define ethical standards or state whether a model passes or fails. Instead, it’s an additional external way for businesses to test the models they create.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

