Microsoft Blocks Police Use of OpenAI for Facial Recognition Cameras
Facial recognition ban comes after the manufacturer of the Taser released its own LLM-powered solution
Microsoft has updated the list of acceptable uses of its Azure OpenAI Service, barring law enforcement from using it in facial recognition cameras.
The Azure OpenAI Service provides Microsoft’s cloud customers with access to OpenAI models such as GPT-4 Turbo and DALL-E.
Microsoft has now barred U.S. police departments from accessing the OpenAI models for “facial recognition purposes.”
While facial recognition systems rely on visual data inputs, OpenAI models like GPT-4 could be used to augment related processes. For example, a large language model could be used to improve user interfaces for facial recognition systems, generate natural language responses to queries or produce usage reports.
Just last month, Axon Enterprise, which develops technology for law enforcement and the military, including the first-ever Taser, unveiled an AI-powered tool that summarizes audio from an officer’s body camera.
Under Microsoft’s new rules, a law enforcement agency hoping to use OpenAI’s models for such a use case would be barred.
The Code of Conduct page for the Azure OpenAI service states that its models “must not be used for any real-time facial recognition technology on mobile cameras used by any law enforcement globally to attempt to identify individuals in uncontrolled, ‘in the wild’ environments.”
The rules extend to officers on patrol using body-worn or dashboard cameras. Before Microsoft’s update, OpenAI’s API users were already barred from using its models for such uses.
Axon’s product launch did not state whether it was using the cloud service.
The ban covers law enforcement uses globally, so it would also apply to the French police, who are set to deploy facial recognition cameras at this summer’s Paris Olympics.
Other use cases barred under the Azure OpenAI Service’s Code of Conduct include using models to manipulate or deceive people, to power romantic chatbots, or for social scoring.
Another notable banned use case is employing OpenAI models to identify people based on their physical, biological or behavioral characteristics, a restriction that could likewise apply to facial recognition systems.