Court proceedings scheduled to begin in Vermont later this month
Clearview AI has raised $8.6m to continue offering controversial facial recognition software to businesses and federal agencies.
The round comes despite a series of legal challenges and growing concerns over the company’s data handling practices.
Clearview AI spent most of its life operating in the shadows, until it was exposed in an explosive New York Times feature in January.
The company claims to have scraped more than three billion publicly available images from Facebook, YouTube, Instagram, and other platforms, using them to build a facial recognition tool it describes as highly accurate.
But the system has not been independently tested or analyzed for racial bias. Reporters later discovered that the company told prospective clients its platform had a 75 percent accuracy rate – but that figure reflected how often the tool returned a match, not whether those matches were correct.
In February, Buzzfeed News published a list of Clearview’s customers – comprising 2,200 law enforcement agencies, companies, and individuals. Among them were US Immigration and Customs Enforcement (ICE), the US Attorney's Office for the Southern District of New York, and Macy’s.
The company has individual customers who work for the FBI, Customs and Border Protection (CBP), Interpol, and countless local police departments.
Contradicting Clearview’s claims that it only dealt with law enforcement agencies in the US and Canada, documents showed the company was selling to corporations, as well as to governments in Australia and Saudi Arabia. It also supplied a version of the app to investors, who used the tech at dinner parties to identify other venture capitalists.
Following the revelations, and a subsequent lawsuit, Clearview promised to stop selling its products to corporate clients.
Earlier this month, a judge in Vermont ruled that the state can proceed with a lawsuit against the company, which is alleged to have collected photos without consent. “Clearview’s practices are disturbing and offend public policy,” Vermont Attorney General T.J. Donovan said at the time.
The ACLU has sued the company in Illinois, using the same state law that previously saw Facebook fined $550m. Over in Europe, the EU's privacy body has said the tech might be illegal, but is yet to make a ruling.
None of this has put off Clearview’s new investors, which remain undisclosed. Previous investors include Kirenaga Partners and controversial billionaire Peter Thiel, who co-founded Palantir, backed Anduril, sits on the board of Facebook, and serves as an unofficial advisor to the President.
Clearview’s SEC filing also reveals two existing board members, Murtaza Akbar of Liberty City Ventures, and Hal Lambert, the founder of Point Bridge Capital.
Lambert served on the Inaugural Committee for President Trump and was the finance chair for the Texas GOP, Buzzfeed News noted. Meanwhile, Point Bridge is responsible for an investment fund for pro-Republican companies, called MAGA ETF.
All of this is set against a backdrop of growing hostility towards facial recognition apps in the hands of law enforcement agencies, and the wider protests against police brutality.
As demonstrations gripped the nation, major tech companies promised to rethink their approach to facial recognition: IBM halted sales of “general purpose” facial recognition, while Amazon and Microsoft paused sales to police. Several American cities also announced facial recognition bans.
Clearview saw this as an opportunity: “While Amazon, Google, and IBM have decided to exit the marketplace, Clearview AI believes in the mission of responsibly used facial recognition to protect children, victims of financial fraud, and other crimes that afflict our communities,” CEO Hoan Ton-That said at the time.
Ton-That also claimed that the company had solved the racial bias problem found in many facial recognition tools. This has not been independently verified, and Clearview has not submitted its models to the National Institute of Standards and Technology for study.
Among the algorithms NIST has studied, it found higher rates of false positives for Asian and African American faces than for Caucasian faces, with even higher error rates on images of African American women.
Facial recognition errors have already led to at least one wrongful arrest – Robert Williams, an African American man, was detained in June after DataWorks Plus software falsely identified him as a suspect.