US Army develops battlefield AI for troop safety

A team of US technologists has developed an artificial intelligence tool that it says can help soldiers identify potential threats around them during combat missions.

The researchers, who work at the US Army Research Laboratory in Maryland, claim that the system can help soldiers improve their skills 13 times faster than conventional technologies, potentially saving lives at the same time.

The machine-learning-powered system helps soldiers interpret battlefield information more quickly – for example, identifying explosive devices or analysing images to locate potential threat zones.

Sophisticated tech


As part of the project, the scientists combined low-cost, lightweight hardware and an AI technique called collaborative filtering on a low-power Field Programmable Gate Array (FPGA) platform to develop more streamlined training for soldiers.

They claim to have achieved a thirteen-fold improvement in speed, compared to traditional multicore or GPU-based systems, which tend to be very expensive.
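The laboratory has not published its code, but for readers unfamiliar with the technique named above, the sketch below shows what a conventional, CPU-based form of collaborative filtering looks like: factorising a small matrix of observed user–item interactions and training the latent factors with stochastic gradient descent. The toy data and parameters are invented for illustration, and the FPGA acceleration behind the reported speed-up is not represented here.

```python
# Minimal sketch of collaborative filtering via matrix factorisation, trained
# with stochastic gradient descent. Illustrative only; the Army Research
# Laboratory system runs an FPGA-accelerated variant whose details are not public.
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings matrix: rows = users, columns = items; 0 marks an unobserved entry.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                      # number of latent factors
lr, reg = 0.01, 0.02       # learning rate and L2 regularisation

P = rng.normal(scale=0.1, size=(n_users, k))   # user factor matrix
Q = rng.normal(scale=0.1, size=(n_items, k))   # item factor matrix

observed = np.argwhere(R > 0)                  # indices of known entries

for epoch in range(2000):
    rng.shuffle(observed)                      # visit entries in random order
    for u, i in observed:
        err = R[u, i] - P[u] @ Q[i]            # prediction error on one entry
        Pu = P[u].copy()
        P[u] += lr * (err * Q[i] - reg * P[u])  # SGD update for user factors
        Q[i] += lr * (err * Pu - reg * Q[i])    # SGD update for item factors

# Reconstructed matrix: the zero entries are now filled with predictions.
print(np.round(P @ Q.T, 2))
```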

Dr. Rajgopal Kannan, who led the project, believes that the technology could eventually be integrated into combat vehicles, which – like other connected vehicles – are likely to be underpinned by cognitive technologies.

Kannan worked with a group of artificial intelligence and machine learning experts from the University of Southern California on the project. Together, the researchers are now exploring other ways to accelerate and optimise tactical learning applications for combat operations.

The rise of AI

In a bid to identify and respond to increasingly complex threats, the US Army has been increasing its in-house research efforts into AI and machine learning.

According to the researchers, the army wants to “gain a strategic advantage and ensure warfighter superiority with applications such as on-field adaptive processing and tactical computing”.

Kannan confirmed that these projects involve “developing several techniques to speed up AI/ML algorithms through innovative designs on state-of-the-art inexpensive hardware”.

These could “become part of the tool-chain for potential projects,” he added. However, he didn’t specify any timescale for deployment.

In February, Kannan published a paper on a technique for accelerating stochastic gradient descent, which he believes could become “ubiquitous” in future machine learning training algorithms. The study received the best paper award at the 26th ACM/SIGDA International Symposium on Field-Programmable Gate Arrays.
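Kannan’s paper concerns hardware-level acceleration on FPGAs, which cannot be reproduced in a few lines of code. As background only, the following minimal, framework-free sketch shows the kind of training loop being accelerated: mini-batch stochastic gradient descent fitting a simple linear model on invented toy data.

```python
# Minimal, illustrative mini-batch stochastic gradient descent (SGD) for a
# linear regression model. Synthetic data; this shows only the algorithm that
# FPGA-based accelerators are designed to speed up.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: y = X @ true_w + noise
X = rng.normal(size=(1000, 5))
true_w = np.array([1.5, -2.0, 0.5, 3.0, -1.0])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(5)             # model parameters to learn
lr, batch_size = 0.05, 32   # step size and mini-batch size

for epoch in range(50):
    order = rng.permutation(len(X))            # reshuffle data each epoch
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        # Gradient of mean squared error on this mini-batch
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w -= lr * grad                          # gradient descent step

print(np.round(w, 2))  # should be close to true_w
```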

Internet of Business says

Military deployments of AI and robotics are an obvious application of these technologies, especially given the potential to both save lives and gain tactical advantage. However, the arms race towards increasingly automated battlefields presents a growing ethical and legal challenge.

Richard Moyes is managing director of Article 36, a not-for-profit organisation that works to prevent unintended, unnecessary, or unacceptable harm caused by weapons systems (the name refers to Article 36 of the 1977 Additional Protocol I of the Geneva Conventions).

One of the problems in this area is that each individual step towards a technology outcome might seem reasonable in isolation, but the end result may be morally questionable – in this case, a possible future of automated warfare in which machines may be programmed to kill without human intervention.

Speaking earlier this year at the Westminster e-Forum event on AI policy, Moyes said that the most pressing issue is the dilution of human control, and therefore of human moral agency.

He said, “The more we see these discussions taking place, the more we see a stretching of the legal framework, as the existing legal framework gets reinterpreted in ways that enable greater use of machine decision-making, where previously human decision-making would have been assumed.”

The answer, he said, is to create an obligation for “meaningful human control”.

“That doesn’t mean absolute control over every aspect of an attack,” he explained, “but there needs to be a sufficient form of human control for us to feel that a commander has a predictable understanding of what’s going to happen. And also that they can be reasonably held accountable for the actions that are undertaken.”

Additional reporting: Chris Middleton.
