Microsoft vows to support US military, amid ethical AI debate

Microsoft CEO Satya Nadella and President Brad Smith have stressed the company’s commitment to working with the US military, during a monthly Q&A session with employees.

Their declaration comes at a time when many are questioning the ethical implications of the weaponisation of AI and other technologies.

As explained in the Q&A, Microsoft’s work in the defence space is based on three convictions:

“First, we believe in the strong defence of the United States and we want the people who defend it to have access to the nation’s best technology, including from Microsoft,” they said.

“Second, we appreciate the important new ethical and policy issues that artificial intelligence is creating for weapons and warfare. We want to use our knowledge and voice as a corporate citizen to address these in a responsible way through the country’s civic and democratic processes.

“Third, we understand that some of our employees may have different views. We don’t ask or expect everyone who works at Microsoft to support every position the company takes.”

Lessons from history

In reiterating its support for the US military, Microsoft pointed to past just wars in which the US has been involved, including the American Civil War and the Second World War. Smith also pointed out that ethical issues are nothing new to the US military, citing chemical, nuclear, and cyber weapons.

The company said it took the view that, as a technology leader, Microsoft should help inform ethical decisions about AI and cyber warfare in the military by collaborating with the US Department of Defense.

It’s an argument that can be summarised as, ‘While not perfect, the US military is ultimately doing good work to defend citizens, and if they’re going to be developing AI, it should be in partnership with those best placed to do a good job.’

In a subsequent blog post, Brad Smith clarified the company’s stance: “We’ve worked with the US Department of Defense on a longstanding and reliable basis for four decades.

“You’ll find Microsoft technology throughout the American military, helping power its front office, field operations, bases, ships, aircraft, and training facilities. We are proud of this relationship, as we are of the many military veterans we employ.

“Artificial intelligence, augmented reality, and other technologies are raising new and profoundly important issues, including the ability of weapons to act autonomously.

“We’ll engage not only actively but proactively across the US government, to advocate for policies and laws that will ensure that AI and other new technologies are used responsibly and ethically.”

Internet of Business says

The timing of Microsoft’s renewed commitment to partnering with the US military is significant. It follows the company’s bid on the Department of Defense’s (DoD) Joint Enterprise Defense Infrastructure (JEDI) cloud project, from which Google withdrew, citing conflict with its ethical principles.

The winner of the contract will re-engineer the DoD’s end-to-end IT infrastructure, from the Pentagon to field-level support of the country’s servicemen and women.

Military contracts are hugely valuable to Microsoft. The JEDI cloud project alone is thought to be worth as much as $10 billion over a ten-year period, and Microsoft is eager to secure it. The Pentagon is also planning to invest $2 billion in AI over the next five years.

Speaking at a conference in San Francisco recently, Amazon CEO Jeff Bezos also defended contracts with the DoD, despite dissent from within the company and some employees threatening to quit:

“If big tech companies are going to turn their back on the US Department of Defense, this country is going to be in trouble,” he said. “We are going to continue to support the Department of Defense, and I think we should.”

It’s a sentiment that stands in stark contrast to Google’s new ethical AI strategy.

Some of Microsoft’s employees will take issue with the approach, opposing outright the development of technology that could be used to injure or kill.

Others may accept the necessity of military forces, and of their modernisation, while remaining uncomfortable working on such projects. Microsoft’s solution is to let those employees seek out the work they want to do, with help from the HR team.

For all the rhetoric about ethical issues being a part of military policy since Cicero, the advent of AI is raising questions around agency and responsibility that have never been faced before.

AI systems are more than just another type of weapon; they open the way to a completely new mode of warfare – to machines autonomously taking human lives, perhaps. As defence forces, companies, and lawmakers scramble to map this new ground, the uncertainty is causing internal conflict.