Singapore Launches AI Governance Testing Toolkit
AI Verify gives businesses the tools to check their models for bias
The Singaporean government has launched AI Verify, an AI governance testing framework and software toolkit designed to check whether AI systems meet their declared performance benchmarks.
AI Verify is designed to encourage transparency in AI systems and includes testing frameworks and a software toolkit to conduct technical tests.
It covers major areas of concern for AI systems, including explaining how models reach decisions, ensuring proper management and oversight, and checking that the use of AI does not unintentionally discriminate.
Available as a Minimum Viable Product (MVP), AI Verify lets businesses validate what their AI systems can do and document the steps they have taken to mitigate the risks those systems pose.
AI Verify was launched in a pilot phase last May by the country’s Infocomm Media Development Authority (IMDA), a statutory board that sits under Singapore’s Ministry of Communications and Information.
According to the IMDA, AI Verify can be deployed in either developer or user environments.
It is important to note, however, that AI Verify doesn't define ethical standards or declare whether a model passes or fails. Instead, it offers businesses an additional, external way to test the models they create.