Digital rights group wants more AI accountability

Max Smolaks

August 21, 2019


AI Business chats to Javier Ruiz, policy director at Open Rights Group, about regulatory challenges posed by AI


A lack of transparent AI algorithms. Companies wrongly claiming ownership of personal data. Big-tech behemoths reaping the economic benefits of AI at the expense of others. Javier Ruiz worries about all these things, which is hardly surprising. He is policy director at Open Rights Group (ORG), a UK-based organization that seeks to protect citizens’ digital rights. Without proper safeguards and appropriate regulatory frameworks, Ruiz frets that AI might create a dystopian nightmare.

“We can’t have situations where the likes of banks say you can’t get a loan – the ‘computer says no’ scenario – without knowing the reasoning behind the decision,” he says.

Ensuring transparency of AI algorithms – where the reasoning behind decisions is disclosed – is one of the regulatory priorities of the European Commission (EC). The EC’s overall aim is to create an appropriate ethical and legal framework for AI, but how this might be shaped and implemented is far from clear.

“Tech companies are not good at self-regulation, and the notion that ethical guidelines can keep them on the straight and narrow is firmly discredited when it comes to the handling of personal data,” Ruiz says. The ORG man is not advocating heavy-handed government intervention, however. “We want freedom of expression, rather than governments controlling specifically what is said,” he adds.

What’s the optimal approach, then, to bringing order to what some might see as an AI wild west in the absence of a mature regulatory framework?

“There’s a need for a lot more public sector involvement in AI systems, particularly those that provide public functions,” Ruiz explains. He points out that the Internet was built on open source software that still powers most servers, providing some protection against control falling into the hands of a few heavyweight tech companies.

In contrast, in the world of AI-based digital assistants and language translation systems, Ruiz thinks ecosystems are worryingly undemocratic: “If you want to build a digital assistant voice app, you pretty much have to go to Google or Amazon. They’ll happily provide their APIs, but all the data generated goes to them.” Ruiz reckons there’s a strong case for AI infrastructure of this sort to be made publicly available. 

GDPR not enough

Ruiz welcomes the European Union’s General Data Protection Regulation (GDPR), which came into force in May 2018. GDPR puts the onus on companies to secure personal data and seek consumers’ consent before using it, all of which is backed up by hefty fines for those that don’t comply. Big data, of course, is the fuel that powers AI and machine learning.

But Ruiz isn’t convinced GDPR is enough to stop abuses of power. “It’s very centered on protecting the individual, but data can be anonymized to a certain degree,” he says. “Identifying trends and gaining a better understanding of how the world works through big data is certainly useful, but we also need to ask: who benefits from that knowledge? Whose data will be used, and how will it be used? The problem right now is that whoever owns the technology and the data will be the ones to benefit.”

GDPR, by definition, is general. Ruiz says GDPR is designed to be complemented by specific codes of practice and conduct. “There’s space there for a much tighter regulatory approach,” he says. “Many companies tend to take a proprietary view of the data they collect when in fact the data may originally belong to other people.”

