UK Data Watchdog Monitoring Generative AI Risks

ICO ‘committed’ to being ‘ahead of the curve.’

Ben Wodecki, Jr. Editor

November 24, 2022

3 Min Read

The U.K.’s data watchdog, the Information Commissioner's Office (ICO), is monitoring generative AI and other “novel risks” that are emerging.

Speaking at the 'Westminster eForum on Next Steps for U.K. AI,' Stephen Almond, the ICO’s director of technology and innovation, disclosed the watchdog's next targets following its recent clampdown on emotion analysis AI tools, which it warned could lead to “bias and discrimination.”

He confirmed the ICO was monitoring generative AI, which is quickly gaining traction in the AI community.

Generative AI, such as the text-to-image tools DALL-E and Stable Diffusion, is fast gaining adoption among mainstream audiences but has raised copyright concerns for platforms. Getty Images, for example, decided to bar images made using generative AI tools due to potential legal troubles, though it later partnered with Bria to incorporate AI visual content tools that Getty said respect the intellectual property rights of creators.

Almond also said the watchdog is monitoring AI-powered recruitment tools. The EU’s prospective AI Act would impose strict restrictions on such systems, including mandatory human oversight measures.

“We're committed [to] examining concerns about the use of algorithms to sift recruitment applications, and particularly the risk that [this] negatively impacts employment opportunities of those people from diverse backgrounds,” he said.


Referring to other issues the office is monitoring, Almond said, “We are continually scanning the horizon and investing our resources to look at novel risks that are emerging. Whether that's looking at new forms of technology, but also whether it's looking at some of the inherent risks that come about.”

He referenced the ICO’s current work in monitoring security risks that could arise from model inference attacks, describing the work as “a continual challenge.”

“We're really committed to trying to lean in and make sure that we are ahead of the curve and that our guidance and our support is continually up to date,” he said. “That's why the services we provide like our Innovation Services are so crucial for us, because we want organizations to engage with us on data protection, the risks that they see, and discuss with us what is the best way of mitigating these.”

Clearview AI

One company the ICO took action against was facial recognition startup Clearview AI. The ICO hit the company with a $9.4 million fine back in May and ordered it to delete any data it held on U.K. citizens.

Almond said the action against Clearview had a positive outcome in that it brought the watchdog closer to its counterparts around the globe: the ICO conducted its Clearview investigation jointly with Australia’s data watchdog.

“Ultimately, data flows and AI supply chains are global. We understand this and we understand that for firms that are looking to operate in one jurisdiction, they'll be looking at how do they make it work across multiple jurisdictions," he said.

“And so it's really incumbent on us as the data protection regulator to make sure that we're working in lockstep with our partners in other jurisdictions to try and make it easy and predictable for organizations that actually, we will take action in concert, around breaches that we see just like those in the case of Clearview.”

About the Author

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

