Microsoft restricts uses of its facial recognition tool

Azure Face users are no longer allowed to scan for emotions, age or race.

Microsoft has unveiled sweeping changes to its use of AI after the company published a new Responsible AI Standard.

The 27-page document outlines the company’s commitments to developing and deploying trustworthy AI.

Among the notable inclusions are commitments to overhaul facial and emotional recognition and neural voice usage. The company, which owns AI firms such as Nuance and Two Hat Security, said it will no longer allow users of its technology to infer attributes such as age and gender.

Companies seeking to use Microsoft-owned facial recognition technologies such as Azure Face will now have to apply for access and commit to Microsoft's AI ethics standards to ensure safe usage.

“Our standard will remain a living document, evolving to address new research, technologies, laws, and learnings from within and outside the company,” said Natasha Crampton, Microsoft’s chief responsible AI officer. “We’re committed to being open, honest, and transparent in our efforts to make meaningful progress.”

The use of responsible and ethical AI is one of the biggest focal points of the AI industry today. It was among the most discussed topics at our recent AI Summit London, with speakers from DeepMind, AstraZeneca, the Bank of England and IBM all providing expertise on how to approach and apply ethically sound AI.

For Microsoft, responsible AI means putting people at the center of system design decisions and respecting their values, according to the company's announcement. The new standard requires developers to carry out activities such as impact assessments, data governance and appropriate human oversight.

The tech giant said a multidisciplinary group of researchers, engineers and policy experts spent a year drafting the policy, with plans to update and adapt it continuously to meet upcoming regulatory requirements.

Microsoft's new AI standard isn't the company's first attempt to outline ethical practices. An initial version was released in 2019, with the company admitting it had learned some "important lessons" from its product experiences.
