Becoming the second-largest city in the world to do so
Boston will ban its city government from using facial recognition technology.
The city council unanimously voted for the ban, with some councilors noting that the technology has often proven to be wildly inaccurate for people of color.
The measure, which passed with a veto-proof majority, now goes to Mayor Marty Walsh.
The ban joins similar moves in San Francisco and Oakland, as well as in other Massachusetts communities like Springfield, Cambridge, Northampton, Brookline, and Somerville.
But there are still some loopholes
The law makes it illegal for city officials, including local police, to "obtain, retain, possess, access, or use" facial recognition technology. It will also be illegal for the city government to enter into contracts that enable the use of facial recognition tech.
There are some exemptions, however, including complying with the National Child Search Assistance Act, and using facial recognition for authentication (such as face unlock on a phone).
Boston officials will also be able to use evidence generated from a facial surveillance system, so long as it was not generated by or at the request of Boston officials.
So, for instance, if the FBI shared suspect lists with the Boston Police Department that were generated with the help of facial recognition, the police could still use that information.
Private corporations will still be able to develop, sell, and use facial recognition technology - as long as they do not do so in conjunction with Boston officials. A BuzzFeed investigation into controversial AI company Clearview earlier this year found several businesses using its facial recognition technology in Boston, including real estate company Boston Properties.
The Boston Police Department claims that it does not currently use facial recognition technology, but noted that the BriefCam software it uses can be upgraded to support facial recognition.
"Representatives from the Boston Police Department... explained that the software license allows the department to shut off the facial recognition aspects and use the software and upgrade for object recognition and video summary which will save time, money, and resources,” the council report states.
Earlier this month, Boston Police Commissioner William Gross said that he was against using the technology until it was shown to be accurate and unbiased. "I didn't forget that I'm African American and I can be misidentified as well," he said in a hearing.
Earlier this month, the US recorded its first known case of false arrest due to a facial recognition error. Police in Detroit arrested Robert Julian-Borchak Williams after feeding grainy footage of a shop thief through a facial recognition system.
"When I look at the picture of the guy, I just see a big black guy. I don't see a resemblance. I don't think he looks like me at all," Williams told NPR.
"[The detective] flips the third page over and says, 'So I guess the computer got it wrong, too.' And I said, 'Well, that's me,' pointing at a picture of my previous driver's license. 'But that guy's not me,'" he said, referring to the footage.
After the case went public, the charges were dropped following 30 hours of detention, bail, and a court hearing, and the Detroit Police Department claimed that it had enacted new rules: identification systems may now be run only on still photos, and only to solve violent crimes.
Williams was helped by the ACLU, which was also involved in pushing for the Boston ban.
The restrictions come amid widespread Black Lives Matter protests, which seek to highlight institutional racism and police brutality in the US.
In response to the protests, IBM said that it would stop developing general-purpose facial recognition tools, while Amazon and Microsoft paused sales to police forces.
For more on facial recognition technology, and how protests are surveilled, be sure to listen to the latest episode of the AI Business podcast.