
Alphabet CEO: “There is no question” AI needs to be regulated


Surprising comments from one of the world’s largest AI vendors

by Max Smolaks 20 January 2020

Sundar Pichai, CEO of Google’s parent company Alphabet, surprised the audience at an event in Brussels on Monday by inviting governments to regulate the use of artificial intelligence.

Sundar Pichai, CEO of Google © Hung Vu

“There is no question in my mind that artificial intelligence needs to be regulated. The question is how best to approach this,” he said in a speech, later published by the Financial Times.

Pichai made his comments at a closed conference organized by Bruegel, an independent European think tank that specializes in economics, where he was due to meet Margrethe Vestager, the EU's commissioner for competition, recently appointed executive vice president for a Europe Fit for the Digital Age.

The meeting comes at a time when governments on both sides of the Atlantic are working out whether they should take a more active role in regulating AI products and services, and how they could do it.

Regulate me
Google doesn’t have a great track record with European regulators; over the past ten years, it was repeatedly found to be engaged in anti-competitive practices, and paid more than $8.9 billion in EU fines.

But despite a frosty relationship, Pichai admitted that some degree of oversight from Brussels might be necessary: “Companies such as ours cannot just build promising new technology and let market forces decide how it will be used,” he said. “It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”

At the meeting in Brussels, Pichai warned of the “negative consequences” of artificial intelligence, citing ‘deepfakes’ and abuses of facial recognition technology, but called for a sensible approach that wouldn’t endanger some of the more positive developments, like applications of AI in healthcare – which are closely tied to matters of data privacy.

“Sensible regulation must also take a proportionate approach, balancing potential harms with social opportunities,” Pichai said.

Ethical guidelines for AI are a tough nut to crack, even in a corporate environment: back in 2018, Google drew up a set of seven AI principles to guide its R&D, ostensibly in response to employee backlash over its participation in the Pentagon’s Project Maven, which employed machine learning to scan through military drone footage, among other things.

The document also defined the kinds of AI products that Google wouldn’t build, including “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

However, when the company attempted to develop its ethical AI position further by establishing an AI ethics board, the initiative suffered from multiple setbacks and lasted just over a week before being cancelled.

“We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together,” Pichai said.

“There is no doubt that a revolutionary force such as AI needs to have checks and balances; it absolutely does,” commented Patrick Smith, field CTO for EMEA at Pure Storage. “However, trying to impose regulation is quite a black-and-white response to a complex and nuanced technology such as AI.

“For example, who sets the regulation? The laws will need input from a wide range of industries and perspectives in order to work. Will new laws apply to narrow AI, look to the future with general AI, or address both areas? Will the laws be backdated? One issue is that of retrospective responsibility: will companies be held accountable for any previous action that would then be deemed unlawful? Cooperation from the tech companies will be vital in driving through any regulation, so they can’t feel like they are under threat.”

Europe is involved in several projects that aim to inform its approach to AI: one of these, the Ad Hoc Committee on Artificial Intelligence (CAHAI), was established by the Council of Europe in September 2019 to examine the feasibility of a binding legal framework to address the challenges to human rights that could be posed by AI applications. The committee’s first report is expected in the first half of 2020.
