Microsoft Blocks Police Use of OpenAI for Facial Recognition Cameras

Facial recognition ban comes after the manufacturer of the Taser released its own LLM-powered solution

Ben Wodecki, Jr. Editor

May 7, 2024


Microsoft has updated the list of acceptable uses of its Azure OpenAI Service, barring law enforcement from using it in facial recognition cameras.

The Azure OpenAI Service provides Microsoft’s cloud customers with access to OpenAI models such as GPT-4 Turbo and DALL-E.

Microsoft has now barred U.S. police departments from accessing the OpenAI models for “facial recognition purposes.”

While facial recognition systems rely on visual data inputs, OpenAI models like GPT-4 could be used to augment related processes. For example, a large language model could be used to improve user interfaces for facial recognition systems, generate natural language responses to queries or produce usage reports.

Just last month, Axon Enterprise, which develops technology for law enforcement and the military, including the first-ever Taser, unveiled an AI-powered tool that summarizes audio from an officer’s body camera.

Under Microsoft’s new rules, a law enforcement agency hoping to use OpenAI’s models for such a use case would be barred.

The Code of Conduct page for the Azure OpenAI service states that its models “must not be used for any real-time facial recognition technology on mobile cameras used by any law enforcement globally to attempt to identify individuals in uncontrolled, ‘in the wild’ environments.”


The rules extend to officers on patrol using body-worn or dashboard cameras. Even before Microsoft’s update, OpenAI’s own API terms already barred such uses.

Axon’s product announcement did not state whether the tool relies on Microsoft’s cloud service.

The ban covers law enforcement uses globally, so it would also apply to the French police, who are set to use facial recognition cameras at this summer’s Paris Olympics.

Other use cases barred under the Azure OpenAI Service’s Code of Conduct include using models to manipulate or deceive people, create romantic chatbots or conduct social scoring.

Another notable banned use case is using OpenAI models to identify people based on their physical, biological or behavioral characteristics, a restriction that could also apply to facial recognition systems.


About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.
