UK Data Watchdog Warning: Address Privacy Risks in Generative AI

ICO wants businesses to employ tougher checks on generative AI

Ben Wodecki, Jr. Editor

June 28, 2023

2 Min Read

At a Glance

  • ICO raises concerns over privacy risks associated with generative AI tools, calling for increased compliance from businesses.

The U.K.’s data watchdog is calling on businesses to address privacy risks with generative AI tools before releasing them.

The Information Commissioner's Office (ICO) said businesses deploying generative AI should employ tougher checks to ensure they’re compliant with data protection laws.

Stephen Almond, the ICO’s executive director of regulatory risk, said that while generative AI presents a lucrative opportunity for businesses, there are risks that come with it.

“Businesses are right to see the opportunity that generative AI offers, whether to create better services for customers or to cut the costs of their services. But they must not be blind to the privacy risks,” Almond said at Politico’s Global Tech Day.


“Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won't upset customers or regulators.”

The U.K.’s data watchdog has been monitoring generative AI since ChatGPT’s launch last November.

At a Westminster Forum Policy event late last year, Almond said the ICO is constantly monitoring “novel risks” from emerging technologies. In 2022, the watchdog clamped down on emotional analysis AI tools, contending their use leads to bias and discrimination.

Tackle privacy risks first

At the Politico event last week, Almond said the ICO would be “checking whether businesses have tackled privacy risks before introducing generative AI – and taking action where there is risk of harm to people through poor use of their data.”

“There can be no excuse for ignoring risks to people’s rights and freedoms before rollout," he said. “Businesses need to show us how they’ve addressed the risks that occur in their context – even if the underlying technology is the same. An AI-backed chat function helping customers at a cinema raises different questions compared with one for a sexual health clinic, for instance.”

The U.K. government has tasked regulators with developing sector-specific rules on AI. In implementing those rules, regulators would have to adhere to a series of principles outlined in a government white paper.

The government argues that its approach is more pro-innovation than the EU AI Act.

Speaking at the recent AI Summit London, AI experts from Deloitte said the U.K.'s approach was less rigid than the EU's legislative attempts.

However, figures published by the Appraise Network and YouGov show that two-thirds of MPs lack confidence in regulators' ability to govern AI.

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

