MOUNTAIN VIEW, CA – Following significant internal backlash at Google against the firm’s participation in a U.S. military drone surveillance program, CEO Sundar Pichai has published a list of seven key ethical principles to guide the company’s use of AI.

Back in April, more than 3,000 Google employees – including senior figures – signed an open letter protesting the search giant’s participation in the Pentagon-run Project Maven, a program in which Google’s machine-vision technology was used to ‘improve’ the targeting of U.S. drone strikes – a ‘biased and weaponized’ use of AI, in the letter’s words.

“This plan will irreparably damage Google’s brand and its ability to compete for talent,” the letter said. “Google is already struggling to keep the public’s trust. […] Building this technology to assist the US Government in military surveillance – and potentially lethal outcomes – is not acceptable.”

Less than two months later, Pichai has responded publicly by setting out core ethical principles for the company’s applications of AI and machine learning going forward. He refers to these as ‘concrete standards’ that will ‘actively govern’ Google’s research and product development and, more significantly, shape its business decisions.

“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai says in a blog post on the company’s site. “How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”

The ‘concrete standards’ Pichai outlines are as follows:

  1. Be socially beneficial
  2. Avoid creating or reinforcing unfair bias
  3. Be built and tested for safety
  4. Be accountable to people
  5. Incorporate privacy design principles
  6. Uphold high standards of scientific excellence
  7. Be made available for uses that accord with these principles

In addition, Pichai promises that Google will not design or deploy AI ‘in any of the following application areas’:

  1. Technologies that cause or are likely to cause overall harm.
  2. Weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Although Pichai has ruled out the use of AI in weaponry, he added the caveat that “while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas”, including cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.

The statement was well received by some Googlers. Ed H. Chi, a principal scientist at Google, said in a tweet that it was ‘gratifying’ to see the principles announced. “I’m particularly happy to see that we will tell the world more about the ML Fairness work we have been doing inside Google,” he wrote.

For some, however, Pichai’s statement will not go far enough. Kate Crawford, co-founder of the AI Now Institute, a think tank examining the social implications of AI, argued in a tweet that the statement contained ‘no real accountability’.

“Now the dust has settled on Google’s AI principles, it’s time to ask about governance,” wrote Crawford. “How are they implemented? Who decides? There’s no mention of process, or people, or how they’ll evaluate if a tool is ‘beneficial’… Principles minus process, or verification, or internal appeal structure, or independent review = no real accountability.”

Whether Google will stay true to these principles remains to be seen. What the episode does show is that keeping AI – and its developers – ethical, transparent, and above all accountable is key to public trust in the technology.