UK Government Using AI for Benefits, Marriage Checks Despite Biases
AI Safety Summit host is using algorithms to make decisions on marriage licenses and welfare claims
At a Glance
- U.K. agencies are adopting AI technologies such as facial recognition and marriage fraud detection algorithms.
The U.K. government's AI Safety Summit has been and gone with existential fears over the technology's misuse dominating the discourse. However, new reports suggest the government is using AI to decide which citizens get benefits and which marriage licenses get approved.
An investigation by The Guardian found that at least eight government departments and several police forces are using AI to help improve decision-making.
Among the AI offerings being used are algorithms that flag potential sham marriages to the Home Office and systems deployed by the Department for Work and Pensions to flag potential bogus benefit claims.
The country’s public health provider, the National Health Service, was found to be using AI in a range of settings, including efforts to cut record wait times.
However, The Guardian also found that London’s Metropolitan Police uses a facial recognition tool that makes more mistakes recognizing Black faces than white faces. Racial bias in facial recognition systems is a well-documented problem in the AI world: Google’s 2015 image-recognition algorithm auto-tagged pictures of Black people as ‘gorillas,’ and Facebook’s AI once asked users who had watched a video featuring Black men whether they’d like to “keep seeing videos about primates.”
The Home Office was found to have deployed AI in its airport e-gates to check passports, as well as a tool to help identify potential sham marriages. However, The Guardian viewed an internal Home Office evaluation showing that the tool disproportionately flagged individuals from Romania, Albania, Bulgaria and Greece.
Some of the issues brought up by The Guardian’s investigation were addressed at the other AI event going on this week, the People’s AI Summit, a protest event that said the government’s Safety Summit ignored current real-world issues with AI in favor of existential doomsday scenarios.
Using algorithms in the British civil service is nothing new, with staff having used less sophisticated systems for years to help make decisions.
In a bid to improve transparency around algorithm use, the Cabinet Office, which is responsible for running the government offices, launched an algorithmic transparency reporting standard that encourages departments to disclose their uses of AI. However, disclosures are voluntary.
It hasn’t always been plain sailing for the U.K. government when it comes to using algorithms to make decisions. At the height of the pandemic, student exams were canceled, so education authorities turned to a standardization algorithm to determine student grades. The algorithmic system based the grades on unmoderated predictions from teachers. The episode caused an uproar, with many students given grades far below what they expected, and several civil servants were sacked. Helen MacNamara, who served as Deputy Cabinet Secretary in the Cabinet Office from 2020 to 2021, told an inquiry into the government’s handling of the COVID pandemic this week that the education department knew the system