Google faces rebellion over military AI projects
[Image: X-47B stealth drone]

A number of employees have resigned from Google following the search giant’s recent deal to provide artificial intelligence to the US military. Thousands of others have signed an internal petition in an effort to persuade CEO Sundar Pichai to withdraw Google from “the business of war”.

Around twelve Google employees are believed to have left their jobs because of the company’s decision to provide artificial intelligence to the Pentagon as part of the US military’s Project Maven, according to Gizmodo.

Project Maven seeks to use machine learning and computer vision techniques to improve the gathering of battlefield intelligence.

According to the Pentagon, the project aims to develop and integrate “computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that the Department of Defense collects every day in support of counterinsurgency and counterterrorism operations.”

It’s expected to develop artificial intelligence capable of sifting through vast quantities of aerial imagery and recognising objects of interest.
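By way of illustration only, the sketch below runs a generic, pre-trained object detector (torchvision’s Faster R-CNN, trained on everyday COCO imagery rather than aerial footage) over a single video frame. It is not Project Maven’s system, whose models, data, and object classes are not public; the file name, confidence threshold, and helper function here are hypothetical.

```python
# Illustrative only: a generic object-detection pass over a video frame with an
# off-the-shelf torchvision model. This is NOT Project Maven's system; it simply
# shows the kind of "sift imagery, flag objects of interest" task described above.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pre-trained Faster R-CNN detector (trained on everyday COCO classes, not aerial data)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def flag_objects(frame: Image.Image, score_threshold: float = 0.8):
    """Return bounding boxes and class labels for confidently detected objects in one frame."""
    with torch.no_grad():
        # The model returns one dict per image, with 'boxes', 'labels', and 'scores'
        predictions = model([to_tensor(frame)])[0]
    keep = predictions["scores"] >= score_threshold
    return predictions["boxes"][keep], predictions["labels"][keep]

# Hypothetical usage on a single still image standing in for a video frame
boxes, labels = flag_objects(Image.open("frame_0001.jpg").convert("RGB"))
print(f"{len(boxes)} objects detected above the confidence threshold")
```

Scaled up to the hours of full-motion video the Pentagon describes, the same loop would run over every frame of every feed, which is why analysts want the triage automated in the first place.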

This, many Google staff fear, puts the project on a slippery slope towards the weaponisation of AI, as the technology could easily be applied to improve the efficacy of drone strikes, for example. The company has also been urged to consider its obligations to a global base of users – an ethos famously summed up in its ‘Don’t be evil’ motto.

In April, the Tech Workers Coalition launched a petition asking Google to cancel its Project Maven contract, and demanding that other technology giants avoid working with the US military. “We can no longer ignore our industry’s and our technologies’ harmful biases, large-scale breaches of trust, and lack of ethical safeguards,” the petition read. “These are life and death stakes.”

Google caught between principles and revenue

The internal pushback at Google has occurred against the backdrop of a wider, and increasingly complex, conversation in the technology industry about relationships with governments.

While the likes of Amazon, Microsoft, and IBM are also working closely with the Pentagon, for example, over 30 technology companies – including Facebook and Microsoft, but not Amazon, Apple, or Alphabet – signed the Cybersecurity Tech Accord earlier this year, stating that they would refuse to aid any government, including the US, in carrying out cyber attacks.

But money talks for Google/Alphabet and other companies, for whom government contracts are often among the biggest on offer. For example, Google is one of several companies thought to be in the running for a Pentagon cloud services contract worth more than $10 billion, known (to the dismay of Star Wars fans everywhere) as the Joint Enterprise Defense Infrastructure (JEDI).

Academics weigh in with AI concerns

The world of academia has also raised concerns over Google’s work with the Pentagon. Over 90 academics in the spheres of ethics, AI, and computer science this week published an open letter asking Google to back an international treaty prohibiting autonomous weapons systems, and cease work with the US military.

“If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability,” reads the letter.

“Google has moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally. While Google regularly decides the future of technology without democratic public engagement, its entry into military technologies casts the problems of private control of information infrastructure into high relief.”

Internet of Business says

The automation of warfare seems to be an unstoppable force at present. Drones are an increasingly important strategic tool, while the US’s Loyal Wingman programme is working towards autonomous fighter jets.

Earlier this year, Southampton University aerial robotics expert Jim Scanlan told Internet of Business editor Chris Middleton that “BAe has probably made its last manned fighter jet”. The future, he said, is robotic.

In February at the Westminster eForum event on UK AI policy, Richard Moyes, managing director of Article 36 (a not-for-profit organisation working to prevent the unintended or unnecessary harm caused by weapons systems), identified the moral hazards at the core of this debate. He said that while each of the steps towards a technology outcome might seem reasonable in isolation – including keeping our own armed forces out of harm’s way – the end result is often morally questionable.

However, the most pressing issue, said Moyes, is the “dilution of human control, and therefore of human moral agency”.

“The more we see these discussions taking place,” he continued, “the more we see a stretching of the legal framework, as the existing legal framework gets reinterpreted in ways that enable greater use of machine decision-making, where previously human decision-making would have been assumed.”

The controversy should also be seen in the light of increasing debate about the ethics of AI in any application, given the technology’s ability to automate or perpetuate human bias, and the challenge it presents to core legal principles, such as liability.

Its potential to replace human beings is also uppermost in many people’s minds – not least since the debut of Google’s Duplex system last week.

The UK is one of many countries with a double-headed approach to AI: on the one hand, it is pursuing a new role for itself in the vanguard of ethical development and deployment; on the other, it is rolling out a national surveillance programme, parts of which the High Court found last month to be illegal.