The system turned out to be inaccurate – but its creators haven’t given up on the idea entirely

Louis Stone, Reporter

August 11, 2020


British police have stopped testing a government-funded artificial intelligence system designed to predict gun and knife violence, after serious flaws in its accuracy came to light.

The ‘Most Serious Violence’ program is part of the National Data Analytics Solution (NDAS) project, which has received at least £10 million ($13m) in funding from the Home Office over the last two years.

Before it happens

The MSV system was never used in active policing operations. It was trained on data from crime and custody records of 3.5 million people living in the West Midlands and West Yorkshire, intelligence reports, and the Police National Computer database.

Documents published by the West Midlands Police Ethics Committee, and first reported by Wired, reveal that "it has proven unfeasible with data currently available, to identify a point of intervention before a person commits their first MSV offense with a gun or knife, with any degree of precision."

NDAS originally claimed that the system was up to 75 percent accurate, but after an undisclosed data ingestion error was discovered, its true accuracy was found to be closer to 14 to 19 percent for the West Midlands, and 9 to 18 percent for West Yorkshire.

Further tweaks and fixes brought the best-case accuracy up to 25-38 percent for the West Midlands and 36-51 percent for West Yorkshire Police.

The proposal to continue with the system was therefore rejected. Beyond the accuracy problems, ethical concerns were raised about bias in the data and about the general concept of predictive crime prevention.

While NDAS has stopped working with the police on this specific project, it is believed to be continuing work on violence prediction algorithms. It is also training AI to detect slavery, the movement of firearms, and types of organized crime, Wired noted.

In a separate project, UK police forces are trialing facial recognition systems in British cities that have proved similarly inaccurate. Research by Big Brother Watch found that 93 percent of those stopped during 10 public trials were wrongly identified, while an independent review carried out on behalf of the police by Professor Peter Fussey pegged the technology's accuracy at 19 percent.

An investigation by the UK's Information Commissioner, Elizabeth Denham, found that "the current combination of laws, codes and practices relating to [Live Facial Recognition] will not drive the ethical and legal approach that’s needed to truly manage the risk that this technology presents."

Last year, Metropolitan Police commissioner Cressida Dick warned that police reliance on AI could turn the country into an “Orwellian, omniscient police state,” unless there were strict rules to prevent abuse.

“We’re now tiptoeing into a world of robotics, AI and machine learning... the next step might be predictive policing,” she said.

“People are starting to get worried about that... particularly because of the potential for bias in the data or the algorithm, [like] live facial recognition software.”

Just a year earlier, Steve O'Connell, chairman of the London Assembly's police and crime committee, offered a different approach: slash the budget of the police oversight body and invest it in thousands of cameras and an AI system to analyze all of the footage.

About the Author(s)

Louis Stone

Reporter

Louis Stone is a freelance reporter covering artificial intelligence, surveillance tech, and international trade issues.
