WASHINGTON, DC – The industry backlash against military AI applications has already begun, as AI experts around the world this week implored business leaders and governments not to fund research into AI-powered automated military equipment.
Over 3,000 Google employees – including senior engineers – have signed an open letter protesting the firm’s participation in a Pentagon program, ‘Project Maven’, which uses Google’s machine-vision platform to interpret video imagery and improve the targeting of drone strikes, The New York Times reports.
Writing to Google CEO Sundar Pichai, the signatories demand that the project be cancelled and that Google formulate ‘a clear policy’ stating that neither the company nor its contractors will ever develop technology for warfare. In the strongly worded letter, they argue that “Google should not be in the business of war.”
“Google is implementing Project Maven, a customized AI surveillance engine that uses ‘Wide Area Motion Imagery’ data captured by US Government drones to detect vehicles and other objects, track their motions, and provide results to the Department of Defense.”
“This plan will irreparably damage Google’s brand and its ability to compete for talent. Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust. […] Building this technology to assist the US Government in military surveillance – and potentially lethal outcomes – is not acceptable.”
Employees had raised concerns about Project Maven at a recent companywide meeting. In a public statement on Tuesday, Google described its work on the project as ‘non-offensive’ in nature, and a company spokesman claimed that most of the signatures on the protest letter had been collected before the company had an opportunity to explain the situation. Without referencing the letter itself, Google acknowledged that “any military use of machine learning naturally raises valid concerns.”
The company added that the Pentagon was using ‘open-source object recognition software available to any Google Cloud customer’, based on unclassified datasets. “The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.”
The controversy is part of a wider global debate over the ethical use of AI technologies, one that spans industry, academia, and government. That debate was further highlighted by news that AI researchers from over 30 countries are boycotting a South Korean university over its plans to open a new AI research centre in partnership with a leading defence firm.
Over 50 academics signed an open letter calling for a boycott of Korea Advanced Institute of Science and Technology (KAIST) and defence manufacturer Hanwha Systems, saying they would refuse to collaborate with the university or host its visitors due to fears it would seek to “accelerate the arms race to develop” autonomous weapons.
“There are plenty of great things you can do with AI that save lives, including in a military context, but to openly declare the goal is to develop autonomous weapons and have a partner like this sparks huge concern,” said Toby Walsh, a professor at the University of New South Wales and the organizer of the boycott. “This is a very respected university partnering with a very ethically dubious partner that continues to violate international norms.”
As AI becomes more accessible, developers and businesses have a duty to consider the ethical implications of how the technology is used, whatever the use case. With leading technologists like Elon Musk calling for a ban on killer robots, there is a clear clash between the concerns of the international community, tech leaders, and researchers on one side, and the research initiatives of governments, militaries, and defence firms on the other. Calls for proactive legislation against autonomous weapons technology are already here, and they will only grow louder.