AI Weighs Heavy On The Scales Of Justice

Jonny Ainslie

August 3, 2018

4 Min Read

Lancastrian Chief Constable Mike Barton is not a man to do things by somebody else’s book. His Durham Constabulary has suffered at the hands of the now well-reported cuts to police funding, and is left with only two-thirds of the budget it had in 2010. Fortunately for the citizens of Durham, Mike’s swallowed this hard cheese, viewing it as an opportunity to become “really nimble” and creative. He’s determined that his force will not “go with the fashion, we’ll go with what we believe to be the most sensible way to tackle the problem – so we’re problem solvers rather than rule followers.”

This July, Mike’s novel attitude to police work found him presenting evidence to the Law Society’s Commission into the use of algorithms in the justice system. That is legal longhand for AI, and for a crime-stat-crazy bobby it’s about as exciting as RoboCop was in 1987. The Commission is investigating whether law enforcement may need to be regulated in its deployment of AI, and whether such systems might contravene our human rights or break down our trust in the justice system. So, what has Durham Constabulary been up to?

Well, their HART (Harm Assessment Risk Tool) algorithm, first introduced in 2013 and trained on over 100,000 ‘custody events’, is designed to evaluate the risk of a suspect reoffending. It classifies people into low, medium, or high risk groups to assist human decision makers in deciding whether they should be let out on bail, kept under lock and key, or allowed onto a new ‘Checkpoint’ program.
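HART has been reported to be a random forest classifier, so the general mechanics can be sketched in a few lines of Python. The snippet below is purely illustrative: the data is randomly generated, the number of predictors is made up, and nothing here reflects HART’s real features or thresholds. It simply shows the pattern of training a low/medium/high risk model on historical records and then scoring a new custody event.

```python
# Illustrative sketch only: a three-band (low/medium/high) reoffending-risk
# classifier in the general style of tools like HART. Data and dimensions
# are invented for demonstration; this is not Durham's model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for historical 'custody events': each row is one suspect,
# each column a predictor, and each label is the risk band assigned
# retrospectively from whether that person went on to reoffend.
n_events, n_features = 1000, 5
X_train = rng.normal(size=(n_events, n_features))
y_train = rng.choice(["low", "medium", "high"], size=n_events)

# A random forest lets many decision trees vote, so predictors are combined
# in thousands of ways rather than weighted in a single fixed formula.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# A new custody event receives a risk band plus class probabilities, which a
# human custody officer could then weigh alongside their own judgement.
new_suspect = rng.normal(size=(1, n_features))
print(model.predict(new_suspect))         # e.g. ['medium']
print(model.predict_proba(new_suspect))   # probability per risk band
```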

Checkpoint is designed as an alternative to punitive sentencing: suspects agree to be GPS tagged or to undertake 18-36 hours of voluntary work, as well as a long process of lectures and initiatives offering them alternatives to crime. The decision about who’s eligible for what is obviously an important one: it might mean the difference between rehabilitation with no criminal record and time behind bars. It’s important to note that these options are not given to serious criminals; robbery, murder, hate crime and the like will always be due for more than a harsh rap on the knuckles.

The primary issue for Law Society President Christina Blacklaws, who is chairing the Commission herself, is the extent to which we can trust computer algorithms with the lives of some of our most vulnerable citizens, weighing their punishment on the tipping scales of big data. Other UK forces are already using various pieces of AI: Kent Police use the American PredPol software to flag crime hotspots, analysing the last three years of incident data each day and returning a list of 180 zones, each 500ft square, to keep a closer eye on. South Wales Police and the Met are both trialling facial recognition tech to identify wanted criminals at large events like boxing matches, Champions League finals, or the Notting Hill Carnival – but this has met with less success.

The thing is, Durham’s HART really has been beating to the tune of success of late. Barton’s Constabulary has been rated ‘Outstanding’ for the third year running in the latest HMICFRS inspection. It’s currently the only force in the country to have achieved the top grade, and HM Inspector of Constabulary Matt Parr specifically praised their efforts “seeking to innovate”, supported by software they’ve written themselves, in the face of austerity – because it’s been getting results.

Yet the moral dimension remains, and to tackle it properly there’s one crucial question to ask about the deployment of this sort of software: how does it make its predictions? A similar algorithm used by US police has been criticised for racism after ‘ethnicity’ was included as a predictor for reoffending. You can’t accuse software of bigotry, but if it’s poorly calibrated to assess criminality using years of historical case data that specifically identifies the race of suspects – then the colour of someone’s skin suddenly counts against them in any new situation, and may be weighted equally to their actual criminal history.

Thankfully, HART avoids such issues: it doesn’t use ethnicity as a variable, and its predictors of address and gender are combined with 32 other categories in thousands of combinations before a result is obtained. There are questions, though, about the value of including postcode addresses in this web, as it risks demonising those who simply live in more deprived areas.
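One way to probe that worry, at least in principle, is to measure how much any single predictor drives a model’s output. Continuing the toy sketch above, and supposing its five columns stood for age, prior offences, postcode band, gender and offence type (hypothetical labels, nothing to do with HART’s actual inputs), scikit-learn’s permutation importance can show whether a postcode-style feature dominates the predictions.

```python
# Illustrative audit only: how much does each (hypothetical) predictor,
# including the postcode-style one, influence the toy model trained above?
from sklearn.inspection import permutation_importance

feature_names = ["age", "prior_offences", "postcode_band", "gender", "offence_type"]

# Shuffle each column in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that predictor.
result = permutation_importance(model, X_train, y_train,
                                n_repeats=10, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```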

There are of course other questions to be asked, and more evidence will have to be seen. We seem to want to hold AI systems to a far greater level of scrutiny and transparency than our fellow human officers, partly because it’s impossible to reprimand a machine if it starts getting things wrong. At least with flawed, human decision-makers the managers and politicians have someone to blame other than their own instructions. But with less and less funding available to officers forced to prioritise amid rising levels of petty crime, their AI assistants seem to be here to stay. The Commission’s next hearing will be held on the 12th November, and it will probably deliver a verdict in 2019.

About the Author

Jonny Ainslie

Jonny is an editorial and content executive for AI Business. He is also the editor-in-chief of Journalists On Truth.

