Google Cloud tackles adoption roadblocks with AI explainability toolkit

Max Smolaks

November 21, 2019

Inside the mind of the machine

Google Cloud has launched AI Explanations – a set of tools and frameworks that enable the creation of easy-to-read summaries that explain why a machine learning model made the decisions it did.

Issues with transparency and explainability have long been cited as a barrier to wider AI adoption, especially in fields where algorithmic decisions could affect people directly – like human resources and law enforcement.

Google is one of the world’s largest public cloud vendors and is betting that most of the algorithm training of the future will take place in cloud data centers.

“We’re building AI that’s fair, responsible and trustworthy, and we’re excited to introduce explainable AI, which helps humans understand how a machine learning model reaches its conclusions,” Tracy Frey, director of strategy for Cloud AI at GCP, wrote in a blog post shared with the media.

Google has published a 27-page whitepaper detailing the methodology and appropriate uses for its toolkit, aimed primarily at model developers and data scientists. Some of the examples within are universal, while others rely heavily on GCP services.
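The summaries the toolkit produces are built around feature attributions – per-feature scores that say how much each input pushed a prediction away from a baseline. The snippet below is a minimal, illustrative sketch of one such attribution method (integrated gradients) applied to a made-up toy model; the model, function names and numbers are assumptions for this example only, and it does not use the Google Cloud API.

```python
# Illustrative sketch of feature attribution via integrated gradients.
# The "model" here is a hypothetical toy scorer, not a GCP service.
import numpy as np

def model(x):
    # Toy differentiable scoring model: a fixed linear scorer.
    weights = np.array([0.6, -0.3, 1.2])
    return float(x @ weights)

def integrated_gradients(x, baseline, steps=50):
    """Approximate each feature's contribution to model(x) - model(baseline)
    by averaging numerical gradients along the straight path baseline -> x."""
    total_grad = np.zeros_like(x)
    eps = 1e-5
    for alpha in np.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (x - baseline)
        # Central-difference gradient of the model at this point on the path.
        grad = np.array([
            (model(point + eps * np.eye(len(x))[i]) -
             model(point - eps * np.eye(len(x))[i])) / (2 * eps)
            for i in range(len(x))
        ])
        total_grad += grad
    avg_grad = total_grad / steps
    return (x - baseline) * avg_grad  # per-feature attribution

x = np.array([1.0, 2.0, 0.5])
baseline = np.zeros(3)
attributions = integrated_gradients(x, baseline)
print(attributions)       # e.g. [ 0.6 -0.6  0.6]
print(sum(attributions))  # ~ model(x) - model(baseline)
```

On a linear toy model like this, each attribution reduces to weight × (input − baseline), and the attributions add up to the difference between the prediction and the baseline prediction – which is what makes such per-feature summaries readable for non-specialists.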

Trust issues

Explainability in AI is not a new topic. “One of the issues people have always had with neural nets is that they usually get the right answer, but we don’t know how and why they get there, i.e., we don’t have convincing explanations for their answers,” VMware’s chief research officer, David Tennenhouse, told AI Business back in 2017.

The core of the problem is this: how do you design algorithms complex enough to see the patterns that humans can’t identify, and yet not so complex as to obscure the exact reason for the outcome?

“AI can unlock new ways to make businesses more efficient and create new opportunities to delight customers. That said, as with any new data-driven decision-making tool, it can be a challenge to bring machine learning models into a business without confidence in the underlying data and recommendations,” Frey wrote.

“We’re striving to make the most straightforward, useful explanation methods available to our customers, while being transparent about how they work (or when they don’t!)”

Some of the early customers using the explainability toolkit include Sky, solar energy company Vivint Solar, and iRobot – primarily known for its automated Roomba vacuum cleaners. The latter also started making robotic lawn mowers this year, with the first shipments expected in 2020.

A matter of principle

Responsible use of AI is a sensitive topic for Google, which was forced to define its ethical position on machine learning tech back in 2018, in response to employee backlash over participation in the Pentagon’s Project Maven – which employed machine learning to scan through military drone footage, among other things. To quell the unrest, Google developed and published its AI Principles – a list of seven rules that define the kinds of AI products the company will build – along with examples of applications it won’t pursue, including “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

It is these principles that supposedly guide much of Google’s work on explainable AI, and we were told they are more than just a pretty online document.

“We’re really careful that every time when we are advising companies on how to run particular projects, to make sure you refer back to those in order to help folks from inadvertently making a misstep in how they operate,” Andrew Moore, VP and head of Google Cloud AI, told AI Business at the Google Cloud Next conference in London this week.

“About two weeks ago in Washington, DC, there was a big public meeting on AI for national security, and Kent Walker, [Google] chief legal officer, was there, describing very clearly how Google does not want to be involved in the development of weapons. But it was very open: and he talked about many of the examples of us working with military services in the United States and elsewhere.”

“There are, frankly, many ways that people could abuse artificial intelligence. And so one of the things that we’re trying to do as a company is to help make sure that our customers and the world in general think through these kinds of things carefully, in the principled way.”
