Google Cloud tackles adoption roadblocks with AI explainability toolkit

Max Smolaks

November 21, 2019

Inside the mind of the machine

Google Cloud has launched AI Explanations – a set of tools and frameworks that enable the creation of easy-to-read summaries explaining why a machine learning model made the decisions it did.

Issues with transparency and explainability have long been cited as a barrier to wider AI adoption, especially in fields where algorithmic decisions could affect people directly – like human resources and law enforcement.

Google is one of the world’s largest public cloud vendors, and is betting that most of the algorithm training of the future will take place in cloud data centers.

“We’re building AI that’s fair, responsible and trustworthy and we’re excited to introduce explainable AI, which helps humans understand how a machine learning model reaches its conclusions,” Tracy Frey, director of strategy for Cloud AI at GCP, wrote in a blog post.

Google has published a 27-page whitepaper detailing the methodology and appropriate uses for its toolkit, aimed primarily at model developers and data scientists. Some of the examples within are universal, while others rely heavily on GCP services.

Trust issues

Explainability in AI is not a new topic. “One of the issues people have always had with neural nets is that they usually get the right answer, but we don’t know how and why they get there, i.e., we don’t have convincing explanations for their answers,” VMware’s chief research officer, David Tennenhouse, told AI Business back in 2017.

The core of the problem is this: how do you design algorithms complex enough to see patterns that humans can’t identify, yet not so complex as to obscure the exact reason for the outcome?
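Feature attribution is one common answer to this tension: rather than making the model simpler, it assigns each input feature a share of responsibility for a given prediction. As a rough illustration – not Google’s toolkit or API, and using a toy linear scorer in place of a trained network – here is a minimal numerical sketch of integrated gradients, a widely used attribution method:

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=50):
    """Approximate integrated-gradients attributions for a scalar-valued f.

    Each feature's attribution is its input delta times the average
    gradient of f along the straight path from baseline to x.
    """
    x, baseline = np.asarray(x, float), np.asarray(baseline, float)
    total_grad = np.zeros_like(x)
    eps = 1e-5
    for alpha in np.linspace(0.0, 1.0, steps, endpoint=False):
        point = baseline + alpha * (x - baseline)
        # Numerical (forward-difference) gradient of f at `point`
        for i in range(len(x)):
            bumped = point.copy()
            bumped[i] += eps
            total_grad[i] += (f(bumped) - f(point)) / eps
    return (x - baseline) * total_grad / steps

# Toy "model": a fixed linear scorer standing in for a trained network
weights = np.array([2.0, -1.0, 0.5])
model = lambda v: float(weights @ v)

x = np.array([1.0, 3.0, 2.0])       # the input we want explained
baseline = np.zeros(3)              # an "uninformative" reference input
attr = integrated_gradients(model, x, baseline)
```

A useful sanity check is the method’s completeness property: the attributions sum to the difference between the model’s output at the input and at the baseline, so no part of the prediction is left unexplained.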

“[AI] can unlock new ways to make businesses more efficient and create new opportunities to delight customers. That said, as with any new data-driven decision-making tool, it can be a challenge to bring machine learning models into a business without confidence in the underlying data and recommendations,” Frey wrote.

“We’re striving to make the most straightforward, useful explanation methods available to our customers, while being transparent about how they work (or when they don’t!)”

Some of the early customers using the explainability toolkit include Sky, solar energy company Vivint Solar, and iRobot – primarily known for its automated Roomba vacuum cleaners. The company also started making robotic lawn mowers this year, with the first shipments expected in 2020.

A matter of principle

Responsible use of AI is a personal topic for Google, which was forced to define its ethical position on machine learning tech back in 2018, in response to employee backlash over participation in the Pentagon’s Project Maven – which employed machine learning to scan through military drone footage, among other things. To quell the unrest, Google developed and published its AI Principles – a list of seven rules that define the kinds of AI products that will be built by Google – and examples of some that won’t, including “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

It is these principles that supposedly guide much of Google’s work on explainable AI, and we were told they are more than just a pretty online document.

“We’re really careful, every time we are advising companies on how to run particular projects, to make sure you refer back to those in order to keep folks from inadvertently making a misstep in how they operate,” Andrew Moore, VP and Head of Google Cloud AI, told AI Business at the Google Cloud Next conference in London this week.

“About two weeks ago in Washington, DC, there was a big public meeting on AI for national security, and Kent Walker, [Google’s] chief legal officer, was there, describing very clearly how Google does not want to be involved in the development of weapons. But it was very open: he talked about many of the examples of us working with military services in the United States and elsewhere.”

“There are, frankly, many ways that people could abuse artificial intelligence. And so one of the things that we’re trying to do as a company is to help make sure that our customers, and the world in general, think through these kinds of things carefully, in a principled way.”
