Azure Face users are no longer allowed to scan for emotions, age or race.

Ben Wodecki, Jr. Editor

June 22, 2022

Microsoft has unveiled sweeping changes to its use of AI after the company published a new Responsible AI Standard.

The 27-page document outlines the company’s commitments to developing and deploying trustworthy AI.

Notable inclusions are commitments to overhaul facial and emotional recognition and neural voice usage. The company, which owns AI firms such as Nuance and Two Hat Security, said it will no longer allow users of its technology to infer attributes such as age and gender.

Companies seeking to use Microsoft-owned facial recognition technologies such as Azure Face will now have to apply for access and commit to Microsoft’s AI ethics standards to ensure safe usage.

“Our standard will remain a living document, evolving to address new research, technologies, laws, and learnings from within and outside the company,” said Natasha Crampton, Microsoft’s chief responsible AI officer. “We’re committed to being open, honest, and transparent in our efforts to make meaningful progress.”

The use of responsible and ethical AI is one of the big focal points of the AI industry today. It was among the most discussed topics at our recent AI Summit London, with speakers from DeepMind, AstraZeneca, the Bank of England and IBM all providing expertise on how to approach and apply ethically sound AI.

For Microsoft, responsible AI means putting people at the center of system design decisions and respecting their values, according to the company’s announcement. The new standard requires developers to undertake such activities as impact assessments, data governance and appropriate human oversight.

The tech giant said a group of multidisciplinary researchers, engineers and policy experts spent a year drafting the policy, with plans to continuously update and adapt it to meet upcoming regulatory requirements.

Microsoft's new AI standard isn't the company's first attempt to outline ethical practices. An initial version was released in 2019, with the company admitting it had learned some "important lessons" from its product experiences.

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
