Amazon defends facial recognition tool from bias claims
February 6, 2019
CAMBRIDGE, MA - Following the publication of a landmark MIT study demonstrating gender and racial bias in its Rekognition AI product, Amazon has denied that its facial recognition tool exhibits any bias.
The study, entitled 'Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products', examines the performance of commercial facial analysis products from several companies, including IBM, Microsoft, Amazon, and Kairos. It found that Amazon and Kairos lag significantly behind their competitors, with overall gender-classification error rates of 8.66% and 6.60% respectively and, more worryingly, error rates of 31.37% and 22.50% on darker-skinned female faces. In other words, Rekognition, Amazon's platform, which is used by law enforcement, mistook darker-skinned women for men more than 31 percent of the time, while classifying white male faces with 100% accuracy. By comparison, Microsoft's technology mistook darker-skinned women for men just 1.5 percent of the time.
Joy Buolamwini, co-author of the report and founder of the Algorithmic Justice League, told NBC that the findings have major implications for law enforcement and civil rights. "Why this matters is that you have facial recognition being used by law enforcement. In the UK, you actually have false positive match rates of over 90% - so you actually have people being misidentified as criminal suspects."
In an interview with The New York Times, Buolamwini said that "one of the things we were trying to explore with the paper was how to galvanize action". She explained that her methodology is designed to harness public pressure and market competition to push companies to fix biases in their software.
Amazon, however, went on the defensive. Dr Matt Wood, General Manager of AI at Amazon Web Services (AWS), criticised the study, saying that it did not use the latest version of Rekognition and that its findings did not reflect Amazon's own research, which had used over 12,000 images of men and women of six different ethnicities. He also said that Amazon instructs law enforcement to act on facial recognition results only when the system reports a confidence of 99% or higher.
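To make the 99% figure concrete: in practice, a Rekognition caller enforces that floor through a similarity threshold parameter on the face comparison API. The short Python sketch below, using the boto3 SDK's public CompareFaces call, shows one plausible way to do this; the image file names and AWS region are hypothetical placeholders, and this is an illustration rather than Amazon's prescribed law-enforcement workflow.

import boto3

# A minimal sketch of enforcing the 99% confidence floor Dr Wood describes.
# The region and the image files are hypothetical placeholders.
rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("probe.jpg", "rb") as probe, open("gallery.jpg", "rb") as gallery:
    response = rekognition.compare_faces(
        SourceImage={"Bytes": probe.read()},
        TargetImage={"Bytes": gallery.read()},
        # Only matches at or above 99% similarity are returned.
        SimilarityThreshold=99.0,
    )

for match in response["FaceMatches"]:
    print(f"Possible match at {match['Similarity']:.2f}% similarity")

Note that the threshold only filters which candidate matches the service returns; it does not change the underlying model that the MIT study audited.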
Buolamwini, however, responded to Dr Wood's criticisms, saying: "Keep in mind that our benchmark is not very challenging. We have profile images of people looking straight into a camera. Real-world conditions are much harder. The main message is to check all systems that analyse human faces for any kind of bias. If you sell one system that has been shown to have bias on human faces, it is doubtful your other face-based products are completely bias-free."
Source: BBC
Additional reporting: The New York Times