UK's ICO publishes guidance on AI and data protection

by Rachel England
The document aims to help organizations mitigate the risks of using personal data in AI applications

The UK Information Commissioner’s Office (ICO) has published guidance on data protection issues arising from the use of artificial intelligence. The 80-page document aims to help organizations mitigate the range of risks inherent in using personal data in machine learning applications, focusing on the questions of accountability, governance, legality and security.

The work is the culmination of two years’ research and consultation by the ICO’s AI team and Reuben Binns, associate professor of computer science at the University of Oxford. It recognizes that AI systems use data in different ways for different purposes, and encourages organizations to separate distinct processing operations and to identify the purpose and an appropriate lawful basis for each.

The guide is not legally binding, but aims to offer support and methodologies for approaching work with AI.

A best practice framework

The document is primarily aimed at two audiences, the ICO says: “Those with a compliance focus, such as data protection officers (DPOs), general counsel, risk managers, senior management, and the ICO's own auditors; and technology specialists, including machine learning experts, data scientists, software developers and engineers, and cyber security and IT risk managers”.

Simon McDougall, deputy commissioner for regulatory innovation and technology at the ICO, acknowledged that while AI offers opportunities that could bring “marked improvements for society,” processing personal data through complex and “sometimes opaque” AI-based systems comes with risks.

“Understanding how to assess compliance with data protection principles can be challenging in the context of AI,” he said. “From the exacerbated, and sometimes novel, security risks that come from the use of AI systems, to the potential for discrimination and bias in the data. It is hard for technology specialists and compliance experts to navigate their way to compliant and workable AI systems.

“It is my hope this guidance will answer some of the questions I know organisations have about the relationship between AI and data protection, and will act as a roadmap to compliance for those individuals designing, building and implementing AI systems.” He added that the ICO is open to feedback, and the guidance is likely to evolve as the technology develops.
