Can [A]I Trust You? Artificial Intelligence in Man-Machine Military Teams

How is trust generated in the first place, even among humans? 

By Abhijnan Rej

A theme increasingly clear to observers of the U.S. military’s attempts to incorporate artificial intelligence into military operations is trust: finding ways for humans and autonomous machines to work together seamlessly as a team, even when the final decision to engage a target stays with the former and the latter plays a crucial but still ancillary role.

According to a Breaking Defense report on an Army Futures Command conference on November 16, the issue of human trust in – and the impulse to micromanage – swarms of robots and drones as they complete the observe-orient-decide-act (OODA) loop on their own with the help of AI was a key concern among the experts present. In the words of one of them, “[When] we gave the capabilities to the AI to control [virtual] swarms of robots and unmanned vehicles, what we found, as we ran the simulations, was that the humans constantly want to interrupt them.”

The expert in question works on the System-of-systems Enhanced Small Unit (SESU) program – a Defense Advanced Research Projects Agency (DARPA) initiative that imagines future small combat units of 200-300 soldiers working with a swarm of low-cost robots and UAVs given wide latitude when it comes to target acquisition and engagement. According to DARPA, the SESU program seeks to build system-of-systems capabilities for these small units that would enable them to “destroy, disrupt, degrade, and/or delay the adversary's A2/AD [anti-access/area denial] and maneuver capabilities in order to enable joint and coalition multi-domain operations at appropriate times and locations” by allowing them to flexibly field and interface with robotic swarms.

Speaking of the reluctance of human operators to let AI do its thing, another scientist at the conference was quoted by Breaking Defense as saying: “There is an unfortunate tendency for the humans to try to micromanage AI.” The scientist went on to add that “everybody will have to get used to the fact that AI exists. It’s around us. It’s with us, and it thinks and acts differently than we do.”

But that is easier said than done, as DARPA, the U.S. Army, and the Air Force know. The Army Combat Capabilities Development Command’s Army Research Laboratory (ARL) has made human-autonomous machine teaming a key research priority, with scientists affiliated with the ARL developing new design principles for autonomous machines that account for factors arising out of human-machine interactions. As ARL scientist Jessie Chen said in an ARL Public Affairs interview about the lab’s research in the area, the key challenge is to design “a human-machine interface so the human can be engaged and maintain proper situation awareness of the environment.”

The operative word here is “proper” – walking the tightrope between too little engagement and too much. Too much intervention in the autonomous machine’s operation effectively makes the human the “supervisor” and not the “operator,” which is how Chen sees the proper role for humans in such pairs. This fine balance becomes all the more difficult to strike when humans and machines are paired for close combat against an adversary in a highly contested environment.

Take, for example, an aerial dogfight. DARPA’s Air Combat Evolution (ACE) program made news in August when an AI defeated a human operator in a simulated F-16 dogfight at an event organized under its aegis. Following the AlphaDogfight trials, however, ACE is now focusing on its core mission of “developing a protocol for teaching humans to trust autonomy and to develop more advanced human-machine symbiosis,” as the director of DARPA’s Strategic Technology Office, Tim Grayson, recently put it in a press release. The release announced contracts awarded to five companies to test the performance of human-AI teams in simulated within-visual-range (WVR) aerial combat. It is worth noting that during the final event of the AlphaDogfight trials in August, the Heron Systems AI that bested the human pilot employed tactics that commentators described as unrealistic (and therefore unfamiliar).

It is likely that the Navy, too, will grapple with similar issues as it pursues the strategy of distributed lethality: fielding a large, dispersed fleet of survivable offensive assets – a mix of manned and unmanned/autonomous platforms – in a denial environment. AI is a natural fit for this strategy, in which unmanned vessels, along with UAV swarms, coordinate a first wave of offensive action with broad guidance from human operators.

As things stand – and contrary to popular imagination – the U.S. military is not seeking to field AI-powered autonomous weapons on their own; rather, the plan across the services seems to be to introduce a mix of unmanned and manned assets, with humans and AI systems working together in joint multi-domain operations.

So, in many ways, the question that DARPA is trying to address through SESU or ACE, for example, cuts to the heart of military planning for the future: How does one enhance humans’ trust in autonomous machines that are nominally under their control but are likely to produce surprising, unfamiliar behavior?

In the end, this is a deep question that is not likely to be met with a purely technical answer, lying as it does at the intersection of military strategy, technology, philosophy, and psychology. For example, one can start by asking: How is trust generated in the first place, even among humans? Do I trust X to do Y because I have repeatedly seen it doing things like Y in similar circumstances? If familiarity is the criterion for trust in human-AI systems, the whole program is set for a crash landing. In fact, it is the very unfamiliar moves made by an AI system that give it its edge, as AI champions observe – or induce hysterical fear in some, as the case may be.

Or should one simply trust an AI system even when one is not sure why it does what it does, just as we do not involve ourselves in the minute mechanical decisions our car engines make every time we start them in the morning, as one expert quoted in the Breaking Defense article put it?

Bear in mind also that despite repeated exhortations that surprise and “friction” are par for the course in war, militaries tend to be conservative institutions. Long institutional memories fused with rigorous training that drills conformity could produce soldiers who find it difficult to even partially delegate their safety – for example, through a “loyal wingman” in the form of an AI-powered UAV – to a “thing” they do not understand at a fundamental level. This is related to the core issue of whether humans can settle on a moral parity between an AI system and a human operator, which, in turn, leads to philosophically tricky notions such as empathy.

In the end, what is needed alongside spectacular technological advances like AI systems is a commensurate effort – by militaries and civilians alike – to probe human-machine relationships more deeply. The DARPA programs are a good first step, but they must be augmented by bringing in professionals not normally predisposed to condoning the practice of warfare: philosophers.

The Authors

Abhijnan Rej is security & defense editor at The Diplomat.
