Stop Selling Facial Recognition AI To Police, AI Experts Tell Amazon

Ciarán Daly

April 4, 2019

3 Min Read

SAN FRANCISCO - A number of prominent AI academics and engineers have signed an open letter demanding that Amazon halt all sales of Rekognition, its facial recognition API, to law enforcement agencies due to its alleged bias against women and ethnic minorities.

The letter, entitled 'On Recent Research Auditing Commercial Facial Analysis Technology' and released on Wednesday, carried over 55 signatures at the time of writing. Pointing to a 2018 MIT study suggesting that Amazon's Rekognition had an error rate of 31% on the task of gender classification for women of colour, the signatories called on governments to regulate the technology.

"There are currently no laws in place to audit Rekognition's use, Amazon has not disclosed who the customers are, nor what the error rates are across different intersectional demographics," the letter states. "How can we then ensure that this tool is not improperly being used?"


Rekognition came under heavy criticism last year following the release of the MIT study. However, Amazon's General Manager of Artificial Intelligence, Matthew Wood, and VP of Global Public Policy, Michael Punke, leapt to its defense at the time, dismissing the research as 'misleading' and drawing 'false conclusions'. Now, the open letter directly rebuts some of the criticisms made by Dr. Wood and Mr. Punke, calling their response 'disappointing' and saying they had misrepresented the technical details of the MIT study.

"We hope that the company will thoroughly examine all of its products and question whether they should currently be used by police. Mr. Punke writes that the company supports legislation to ensure that its products are not used in a manner that infringes civil liberties. We call on Amazon to stop selling Rekognition to law enforcement as such legislation and safeguards are not in place."

Explainability and bias in AI datasets are growing concerns for the public, governments, and businesses. Microsoft, for example, came under similar fire after an earlier MIT study reported comparable ethnic biases in its facial recognition software; the firm responded by taking measures to improve the accuracy of its facial recognition tool and pushing for legislation in Washington State to increase transparency around the use of facial recognition technologies in public.

Amazon declined to comment on the letter when asked by The New York Times on Wednesday.

Additional reporting: The New York Times
