June 28, 2023
At a Glance
- ICO raises concerns over privacy risks associated with generative AI tools, calling for increased compliance from businesses.
The U.K.’s data watchdog is calling on businesses to address privacy risks with generative AI tools before releasing them.
The Information Commissioner's Office (ICO) said businesses deploying generative AI should employ tougher checks to ensure they’re compliant with data protection laws.
Stephen Almond, the ICO’s executive director of regulatory risk, said that while generative AI presents a lucrative opportunity for businesses, there are risks that come with it.
“Businesses are right to see the opportunity that generative AI offers, whether to create better services for customers or to cut the costs of their services. But they must not be blind to the privacy risks,” Almond said at Politico’s Global Tech Day.
“Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won't upset customers or regulators.”
The U.K.’s data watchdog has been monitoring generative AI since ChatGPT’s launch last November.
At a Westminster Forum Policy event late last year, Almond said the ICO is constantly monitoring “novel risks” from emerging technologies. In 2022, the watchdog clamped down on emotional analysis AI tools, contending their use leads to bias and discrimination.
Tackle privacy risks first
At the Politico event last week, Almond said the ICO would be “checking whether businesses have tackled privacy risks before introducing generative AI – and taking action where there is risk of harm to people through poor use of their data.”
“There can be no excuse for ignoring risks to people’s rights and freedoms before rollout," he said. “Businesses need to show us how they’ve addressed the risks that occur in their context – even if the underlying technology is the same. An AI-backed chat function helping customers at a cinema raises different questions compared with one for a sexual health clinic, for instance.”
The U.K. government has tasked regulators with developing sector-specific rules on AI. When implementing those rules, regulators would have to adhere to a series of principles outlined in a government white paper.
The government argues that its approach is more pro-innovation than the EU AI Act.
However, figures published by the Appraise Network and YouGov show that two-thirds of MPs lack confidence in regulators' ability to govern AI.