April 24, 2019
by Ciarán Daly
MOUNTAIN VIEW - After almost a year of ethics controversies and internal backlash, Google is still grabbing headlines for its clumsy approach to the sensitive issues around, well, being in possession of the entire world’s data, as well as for some questionable strategic partnerships with US military-industrial institutions. 2018 saw large-scale organizing efforts by Google employees, who staged walkouts, signed petitions, and even blew the whistle on some of the company’s unpopular practices.
It seems that this ongoing struggle toward an ethical future for AI is set to continue within the company. Efforts to put the issue of the company’s social responsibilities to bed have come to nothing. Just yesterday, it was revealed that certain Google employees, including Meredith Whittaker of the AI Now Institute, have been ‘reassigned’ or demoted, allegedly over their involvement in staging walkout protests. On Friday, employees plan to host a town hall meeting where they will openly blow the whistle on alleged 'retaliation' by management.
As one of the largest players in the cloud and AI spaces, what happens next at Google could define the next five years of ethical AI across the board. To understand where things might be heading, let’s see how Google got here in the first place.
April 2018
“We believe that Google should not be in the business of war”
Google’s ethical AI troubles began back in April 2018, when over 3,000 Googlers—including some of its most senior engineers—signed an open letter protesting the firm’s participation in a Pentagon program, Project Maven. The dissenters argued that Project Maven breached Google’s ‘do no harm’ principle, as it would have seen Google machine vision algorithms deployed by the US military to interpret drone footage and to target drone strikes more accurately.
June 2018
“It’s time to ask about governance”
Less than two months after the open letter, Google appeared to concede ground to its employees. CEO Sundar Pichai announced that the firm would be dropping out of Project Maven, and went on to publish what he called ‘concrete standards’ governing Google’s approach to AI ethics moving forward.
Although this seemed to put the issue to bed for several months, the guidelines still faced criticism for being largely toothless. Kate Crawford, co-founder of AI Now, said:
“Now the dust has settled on Google's AI principles, it's time to ask about governance. How are they implemented? Who decides? There's no mention of process, or people, or how they'll evaluate if a tool is 'beneficial'. Are they... autonomous ethics?”
October 2018
“We’re grappling with questions of how AI should be used”
For some time, it appeared that Google had taken employee concerns on board. In the fall, parent company Alphabet Inc. announced that Google would not compete for the Pentagon’s $10 billion JEDI cloud computing contract, which would have seen the US Defense Department’s data shift to the commercial cloud. In a statement to Bloomberg, a Google spokesperson explained that competing for the contract would amount to a violation of the AI principles Pichai had announced earlier in the year.
“We are not bidding on the JEDI contract because, first, we couldn’t be assured that it would align with our AI principles. And second, we determined that there were portions of the contract that were out of scope with our current government certifications.”
The Tech Workers Coalition, an advocacy group for technology workers, claimed that the decision stemmed from ‘sustained’ pressure from workers who “have significant power, and are increasingly willing to use it”. Google’s decision was swiftly followed by employees at Microsoft, who signed an open letter calling for an end to their own company’s bid.
Google appeared to double down on this ethical stance at the end of October, when it launched the AI Global Impact Challenge. Billed as a $25 million ‘AI for social good’ initiative, the competition offers a grant pool to projects that apply AI to societal grand challenges. Google’s AI chief, Jeff Dean, argued it was a clear extension of the ethical principles launched earlier in the year:
“We’re grappling with questions of how AI should be used,” Dean told the press. “AI truly has the potential to improve people’s lives.”
November 2018
“The company’s culture is unacceptable”
Google was still not out of the woods in November, however. Two controversies rocked the tech giant.
First, a smaller group of Google employees petitioned the company to quit Project Dragonfly, a censored search engine for the Chinese market that had come under fire from both human rights groups and politicians for its purported role in internet censorship. Eleven Google engineers initially signed the open letter, backed by an Amnesty International campaign:
“We are Google employees, and we join Amnesty International in calling on Google to cancel project Dragonfly, Google’s effort to create a censored search engine for the Chinese market that enables state surveillance.”
The biggest scandal came in November, when Googlers staged widespread walkouts globally in protest of the company’s handling of alleged sexual harassment and racist incidents, as well as a general ‘lack of transparency’. Google CEO Sundar Pichai and Alphabet CEO Larry Page had previously apologized to staff in a company email, stating that 48 employees had been fired over sexual harassment claims in the previous two years, but that none of those employees had received payouts.
The apology did not prove good enough for Google employees, who walked out en masse at offices worldwide.
“We’ve waited for leadership to fix these problems, but have come to this conclusion: no one is going to do it for us. So we are here, standing together, protecting and supporting each other,” an essay by protest organizers published in The New York Times stated. “We demand an end to the sexual harassment, discrimination, and the systemic racism that fuel this destructive culture.”
April 2019: a reckoning?
Where are we today? Aptly, on April 1st, Google launched an ‘external advisory board’ for the ethical use of AI. The Advanced Technology External Advisory Council (ATEAC) immediately came under criticism for its largely ‘toothless’ nature—the committee was slated to meet only a few times a year and had no binding influence over the company’s actions—as well as for its appointees. One petition from Google employees demanded the removal of then-board member Kay Coles James, president of the conservative think tank The Heritage Foundation, over her comments about trans people and her organization’s climate change denial.
In less than four days, Google announced that the board would be shut down immediately pending re-evaluation, stating that in the ‘current environment’, the ATEAC couldn’t function as desired.
The pain isn’t over yet. Yesterday, Google employees who helped organize the November walkout came forward to say that they have been punished for it: demoted, reassigned, or stripped of research projects. One key figure, Meredith Whittaker, was even told to cease working with the AI Now Institute, the NYU-backed ethical AI research organization.
With an employee-organized ‘town hall’ meeting on the issue arranged for this Friday, the situation looks set only to escalate, and the future of AI and technology ethics at Google remains unclear. What is clear, though, is that Google’s 94,000 employees are growing increasingly unhappy—and increasingly organized. It’s often easy to forget that there are people behind the technology we use every single day, and their self-organization against Google’s management could yet hold the key to the ethical use of AI moving forward. Only time will tell.
Based in London, Ciarán Daly is the Editor-in-Chief of AIBusiness.com, covering the critical issues, debates, and real-world use cases surrounding artificial intelligence - for executives, technologists, and enthusiasts alike.