AI Business is part of the Informa Tech Division of Informa PLC
The regulator wants comments on the 100-page draft of its AI auditing framework
by Max Smolaks 19 February 2020
The Information Commissioner's Office, tasked with regulating access to personal data in the UK, is working on a framework for evaluating the data protection aspects of artificial intelligence systems.
This is the first piece of guidance developed by the ICO to deal with the risks arising from the use of AI. It contains advice on how to understand data protection law in relation to AI, and recommendations for organizational and technical measures to mitigate the risks AI poses to individuals.
The ICO has published the 100-page draft and launched a public consultation on the framework, looking for feedback from organizations across all sectors and sizes.
Responses can be made via the survey link until 5pm on 1 April 2020.
The ICO's mission is to "uphold information rights in the public interest, promoting openness by public bodies and data privacy for individuals." One of its core responsibilities is enforcing the General Data Protection Regulation (GDPR), a set of rules created by the European Parliament and Council of the European Union that will remain in force in the UK until the end of the Brexit transition period, expected at the end of 2020.
The ICO previously said that the data protection regime is not expected to change once the transition period ends - the "default position" is that GDPR will be brought into UK law as an exact copy.
This means personal data protection remains high on the agenda, and with machine learning applications going mainstream, the regulator is now defining what it expects from AI projects that involve personal data.
The guidance document provides a methodology for auditing AI applications and deals with the question of accountability - something that's a hot topic in the industry.
It is aimed at both technology specialists developing AI systems, and risk specialists whose organizations use AI systems.
It could be a coincidence, but the news comes on the day the European Commission launched its whitepaper on artificial intelligence in the EU. The 27-page document outlines proposals for new rules and tests, including those around legal liability for tech firms.
"This consultation is a significant opportunity for the EU to understand how it can address the legislative holes and barriers around AI," commented Georgina Kon, TMT partner at law firm Linklaters.
"If the EU gets this right, it will be instrumental in unlocking the potential of AI and becoming the benchmark for what good AI regulation looks like on a global scale, encouraging both significant innovation and investment in Europe.
"Some may say this plan to build a trusted ecosystem for AI is long overdue, but you can see from the paper perhaps why this has taken so long. There is a huge mesh of legislation that impacts – and could impact – AI, and a broad set of regulators that admit they haven’t got the right skills in-house to regulate the use of AI. There are also some legislative holes that need filling."