Digital rights group wants more AI accountability

Max Smolaks

August 21, 2019

AI Business chats to Javier Ruiz, policy director at Open Rights Group, about regulatory challenges posed by AI

Lack of transparent AI algorithms. Companies wrongly claiming ownership of personal data. Big-tech behemoths reaping the economic benefits of AI at the expense of others. Javier Ruiz worries about all these things, which is hardly surprising. He is policy director at Open Rights Group (ORG), a UK-based organization that seeks to protect citizens’ digital rights. Without proper safeguards and appropriate regulatory frameworks, Ruiz frets that AI might create a dystopian nightmare.

“We can’t have situations where the likes of banks say you can’t get a loan – the ‘computer says no’ scenario – without knowing the reasoning behind the decision,” he says.

Ensuring transparency of AI algorithms – where the reasoning is disclosed – is one of the regulatory priorities of the European Commission (EC). The EC’s overall aim is to create an appropriate ethical and legal framework for AI, but how this might be shaped and implemented is far from clear.

“Tech companies are not good at self-regulation, and the notion that ethical guidelines can keep them on the straight and narrow is firmly discredited when it comes to the handling of personal data,” Ruiz says. The ORG man is not advocating heavy-handed government intervention, however. “We want freedom of expression, rather than governments controlling specifically what is said,” he adds.

What’s the optimal approach, then, in bringing order to what some might see as an AI wild west in the absence of a mature regulatory framework?

“There’s a need for a lot more public sector involvement in AI systems, particularly those that provide public functions,” Ruiz explains. He points out that the Internet was built on open source software that still powers most servers, providing some protection against control falling into the hands of a few heavyweight tech companies.

In contrast, in the world of AI-based digital assistants and language translation systems, Ruiz thinks ecosystems are worryingly undemocratic: “If you want to build a digital assistant voice app, you pretty much have to go to Google or Amazon. They’ll happily provide their APIs, but all the data generated goes to them.” Ruiz reckons there’s a strong case for AI infrastructure of this sort to be made publicly available. 

GDPR not enough

Ruiz welcomes the European Union’s General Data Protection Regulation (GDPR), which came into force in May 2018. GDPR puts the onus on companies to secure personal data and seek consumers’ consent before using it, all of which is backed up by hefty fines for those that don’t comply. Big data, of course, is the fuel which powers AI and machine learning.

But Ruiz isn’t convinced GDPR is enough to stop abuses of power. “It’s very centered on protecting the individual, but data can be anonymized to a certain degree,” he says. “Identifying trends and gaining a better understanding of how the world works through big data is certainly useful, but we also need to ask: who benefits from that knowledge? Whose data will be used, and how will it be used? The problem right now is that whoever owns the technology and the data will be the ones to benefit.”

GDPR, by definition, is general. Ruiz says GDPR is designed to be complemented by specific codes of practice and conduct. “There’s space there for a much tighter regulatory approach,” he says. “Many companies tend to take a proprietary view of the data they collect when in fact the data may originally belong to other people.”
