AI Business is part of the Informa Tech Division of Informa PLC
Back in April, more than 3,000 Google employees - including senior figures - signed an open letter protesting the search giant's participation in the Pentagon-run Project Maven. Under Project Maven, Google's machine vision technology was leveraged to 'improve' the targeting of U.S. drone strikes, in what the open letter called a 'biased and weaponized' use of AI.
“This plan will irreparably damage Google’s brand and its ability to compete for talent," the letter said. "Google is already struggling to keep the public’s trust. […] Building this technology to assist the US Government in military surveillance – and potentially lethal outcomes – is not acceptable.”
Less than two months later, Google CEO Sundar Pichai has responded publicly by setting out core ethical principles for the company's applications of AI and machine learning going forward. Pichai refers to these as 'concrete standards' that will 'actively govern' Google's research and product development and, more significantly, will shape its business decisions.
"We recognize that such powerful technology raises equally powerful questions about its use," Pichai says in a blog post on the company's site. "How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right."
The 'concrete standards' Pichai outlines are as follows:

1. Be socially beneficial.
2. Avoid creating or reinforcing unfair bias.
3. Be built and tested for safety.
4. Be accountable to people.
5. Incorporate privacy design principles.
6. Uphold high standards of scientific excellence.
7. Be made available for uses that accord with these principles.
In addition, Pichai promises that Google will not design or deploy AI 'in any of the following application areas':

1. Technologies that cause or are likely to cause overall harm.
2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
3. Technologies that gather or use information for surveillance violating internationally accepted norms.
4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.
Although Pichai has ruled out the use of AI in weaponry, he added a caveat: "while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas", including cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue.
The statement was well-received by some Googlers. Ed H. Chi, Google's Principal Scientist, said in a tweet that it was 'gratifying' to see the principles announced. "I'm particularly happy to see that we will tell the world more about the ML Fairness work we have been doing inside Google," he wrote.
For some, however, Pichai's statement will not go far enough. Kate Crawford, Co-Founder of the AI Now Institute, a think tank examining the social implications of AI, argued in a tweet that the statement contained 'no real accountability'.
"Now the dust has settled on Google's AI principles, it's time to ask about governance," wrote Crawford. "How are they implemented? Who decides? There's no mention of process, or people, or how they'll evaluate if a tool is 'beneficial'... Principles minus process, or verification, or internal appeal structure, or independent review = no real accountability."
It remains to be seen whether Google will stay true to these principles. What the episode does show is that keeping AI - and its developers - ethical, transparent and, above all, accountable holds the key to public trust in AI.