Cue weaponised A.I. as autonomous UK tech shines on battlefield

Updated 27 September

Weaponised AI took a step closer to reality this week as an autonomous British system able to scour city streets for enemy troops was tested by Canadian soldiers in mock battle.

The SAPIENT system – standing for Sensors for Asset Protection using Integrated Electronic Network Technology – deploys sensor arrays, automation, and artificial intelligence to present soldiers with data about unusual activity, such as people near a checkpoint or sudden changes in behaviour.
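To make the general idea concrete, here is a minimal, purely hypothetical sketch of how a sensor module of this kind might triage raw detections and surface only unusual activity to an operator. It is not based on SAPIENT's actual interface or design; the rules, thresholds, and names are illustrative assumptions.

```python
# Illustrative sketch only: NOT the SAPIENT interface or any Dstl code.
# It shows, in general terms, how an autonomous sensor module might filter
# raw detections and alert a human operator only to "unusual activity",
# as the article describes. All names and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class Detection:
    sensor_id: str          # which sensor produced the reading
    zone: str               # e.g. "checkpoint" or "street"
    people_count: int       # number of people observed
    movement_score: float   # 0.0 (static scene) to 1.0 (sudden change)


def is_unusual(d: Detection,
               max_people_near_checkpoint: int = 3,
               movement_threshold: float = 0.8) -> bool:
    """Hypothetical rule: flag crowds near a checkpoint or abrupt changes."""
    if d.zone == "checkpoint" and d.people_count > max_people_near_checkpoint:
        return True
    return d.movement_score >= movement_threshold


def triage(detections: list[Detection]) -> list[Detection]:
    """Return only the detections worth a soldier's attention."""
    return [d for d in detections if is_unusual(d)]


if __name__ == "__main__":
    feed = [
        Detection("cam-01", "street", 1, 0.1),      # routine, suppressed
        Detection("cam-02", "checkpoint", 5, 0.4),  # crowd at checkpoint, alert
        Detection("uav-07", "street", 2, 0.9),      # sudden movement, alert
    ]
    for alert in triage(feed):
        print(f"ALERT: sensor {alert.sensor_id} in {alert.zone}")
```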

The tests saw “British sensors making autonomous decisions” – such as what to look for – and warning soldiers of any dangers, according to an announcement from the UK government this morning. Some of the sensors were carried by troops, while others were placed on the ground.

Defence or attack?

With current in-service technology, troops have to monitor live camera feeds during operations on city streets. The government says that SAPIENT is designed to take that load away from soldiers and reduce the risk of human error, as well as cut the number of troops needed in the operations room.

How the system itself is free from error was not explained in the announcement.

Defence minister Stuart Andrew said, “Investing millions in advanced technology like this will give us the edge in future battles. It also puts us in a really strong position to benefit from similar projects run by our allies as we all strive for a more secure world.”

SAPIENT was developed by the Defence Science and Technology Laboratory (DSTL) and industry partners, and co-funded initially by Innovate UK. Since 2016, the programme has been funded solely by DSTL, which is part of the Ministry of Defence.

The system was tested alongside a host of other experimental rigs, including robotic exoskeletons and new night vision and surveillance systems.

The trials were part of the Contested Urban Environment programme (CUE), which is set to come to US streets in 2019, and to the UK in 2020. CUE is designed to bring together the ‘Five Eyes’ nations of the UK, US, Australia, Canada, and New Zealand, and put the latest technology in the hands of soldiers on the ground.

Over 230 scientists and troops have been testing the technologies in Montreal, during a three-week programme that culminated in a live battlefield simulation.

Allied solutions

In addition to SAPIENT, a range of unmanned aerial and ground vehicles were used to relay information to an operations centre for analysis by scientists and military personnel. By combining all of these technologies, it was possible to generate new information that could be fed to soldiers and their commanders in real time – significantly enhancing situational awareness, said the government.

Lt Col Nat Haden, SO1 Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR) Capability, Army Headquarters, said, “We need to develop practical solutions to a lot of our aspirations. It brings together our requirements as a user and DSTL as scientific advisers for the expert view. The strength of CUE is that we’re developing things with our key allies in the Five Eyes community.”

DSTL chief executive Gary Aitkenhead added: “This is a fantastic example of our world-leading expertise at its best; our scientists working with our partner nations to develop the very best technology for our military personnel, now and in the future.”

Internet of Business says

Military applications of robotics and AI have been increasingly visible this year.

Earlier this month, a new fleet of robots and drones designed to test for chemical agents, identify battlefield casualties, and provide 3D mapping services was put through its paces in the UK by a team of troops, police officers, and scientists.

The two-week trials saw four different platforms tested in scenarios that simulated chemical attacks and leaks. The systems included robots that can ‘read’ and climb stairs and miniature drones that can rapidly assess hazards.

The trials were part of the British government’s Project Minerva, a £3 million programme to investigate the use of autonomous systems in contaminated zones.

Also in September, we reported that the US military is to spend $2 billion on developing AI for internal business purposes, to make its systems more efficient.

Meanwhile in June, Internet of Business revealed that US soldiers are set to receive miniature personal reconnaissance drones. Imaging company FLIR was awarded a $2.6 million contract for the system.

But controversy over the weaponisation of AI and robotics has been growing too. While greater defensive capabilities for soldiers may be welcome, they arrive in the context of other countries developing similar technologies, creating an arms race towards autonomous weapons. For example, the US Loyal Wingman programme is developing a semi-autonomous fighter plane.

In this environment, the ethical challenges are not all immediately obvious. One is that while each step towards an outcome such as battlefield autonomy might seem reasonable in isolation – protecting troops on the ground, for example – the long-term result may be morally questionable.

Speaking at a Westminster eForum event on AI policy in February, attended by UK government representatives, Richard Moyes, managing director of Article 36 (a non-profit working to prevent the “unintended, unnecessary, or unacceptable harm” caused by weapons systems), said:

“The more we see these discussions taking place, the more we see a stretching of the legal framework, as the existing legal framework gets reinterpreted in ways that enable greater use of machine decision-making, where previously human decision-making would have been assumed.”

Other speakers at the event warned that AI is undermining fundamental legal concepts, such as responsibility and criminal liability.

In other words, human moral agency is increasingly – and perhaps systematically – being reduced by each innovation in the field. This suggests that the long-term trend may be towards autonomous machines that decide to take human lives – without a human in the loop of that decision.

For technology companies that develop AI and machine learning systems, this represents a real moral hazard – as Google discovered in the summer, when an employee rebellion forced the company to announce that it would pull out of the Pentagon’s Project Maven when the contract comes up for renewal next year.

The US defence programme is developing AI to analyse drone footage for possible targets – a capability not dissimilar to the technologies being tested at CUE.

In the wake of the contract withdrawal, Google published a set of ethical principles for future AI development. The company said it will no longer allow its technologies to be used in weapons or in “other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”.

Also on the no-go list are systems that “gather or use information for surveillance, violating internationally accepted norms”, and those “whose purpose contravenes widely accepted principles of international law and human rights”.

However, Google has been criticised by Republicans in the US for withdrawing from the Pentagon deal while collaborating with China on a censored version of its search engine. Google’s work on that system, known as Dragonfly, has caused another mass employee rebellion at the company, but with little sign of the project being cancelled to date.

Former Google research scientist Jack Poulson went as far as writing to the Senate Committee on Commerce, Science, and Transportation this week to say that the company’s work with China directly contradicts its ethical statement.

Google CEO Sundar Pichai is to meet privately with Republican lawmakers on 28 September to discuss these issues, together with what the GOP regards as Google’s bias against conservative causes in search results.

Google has denied that any such bias exists and believes that the current administration is trying to force it to favour conservative causes, under threat of antitrust moves against the company.

Last week, enterprise software giant SAP also released a set of ethical guidelines for future AI development and set up an external advisory panel.

After all, on an autonomous battlefield, ultimate responsibility for life-or-death decisions may rest with a software developer.