DARPA solicits research on AI tools that aid battlefield decision-making

Seasoned commanders may disagree on the path to take in combat.

Ben Wodecki

March 30, 2022



The U.S. military is seeking research proposals on the development of algorithmic tools to aid personnel in making difficult battlefield decisions.

The Defense Advanced Research Projects Agency (DARPA), the military’s emerging tech R&D arm, announced its In the Moment program (ITM), which seeks research on systems that can “assume human-off-the-loop decision-making responsibilities in difficult domains, such as combat medical triage.” Other examples include first-response and disaster relief situations.

A difficult domain is defined as situations where trusted decision-makers disagree about a path to take, no right answer exists and “uncertainty, time-pressure, resource limitations, and conflicting values create significant decision-making challenges.”

The defense agency said seasoned commanders facing the same battlefield scenario may disagree on the action to take, and that computationally representing the underlying human decision-making in dynamic settings may be essential to ensuring algorithms make trustworthy choices under difficult circumstances.

“ITM is different from typical AI development approaches that require human agreement on the right outcomes,” said Matt Turek, ITM program manager. “The lack of a right answer in difficult scenarios prevents us from using conventional AI evaluation techniques, which implicitly require human agreement to create ground-truth data.”

“Baking in one-size-fits-all risk values won’t work from a DoD (Department of Defense) perspective because combat situations evolve rapidly, and commander’s intent changes from scenario to scenario,” Turek said. “The DoD needs rigorous, quantifiable, and scalable approaches to evaluating and building algorithmic systems for difficult decision-making where objective ground truth is unavailable.”

The program has four technical areas:

  • Developing decision-maker characterization techniques that identify and quantify key trusted decision-maker attributes in difficult domains

  • Creating a quantitative alignment score between a human decision-maker and an algorithm in ways that are predictive of end-user trust

  • Designing and executing the program evaluation

  • Integrating policy and practice: providing legal, moral, and ethical expertise to the program; supporting the development of future DoD policy and concepts of operations (CONOPS); overseeing the development of an ethical operations process (DevEthOps); and conducting outreach events to the broader policy community
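To make the second technical area concrete, here is a minimal, purely illustrative sketch of what a quantitative alignment score between a human decision-maker and an algorithm might look like: the fraction of shared scenarios on which both select the same action. DARPA has not published a scoring formula for ITM; the function name, scenario IDs, and actions below are all hypothetical.

```python
# Hypothetical alignment score: agreement rate between a reference human
# decision-maker and an algorithm over scenarios both have answered.
# This is an assumption for illustration, not ITM's actual metric.

def alignment_score(human_choices: dict[str, str],
                    algorithm_choices: dict[str, str]) -> float:
    """Return the proportion of shared scenarios where the algorithm's
    chosen action matches the human's choice."""
    shared = human_choices.keys() & algorithm_choices.keys()
    if not shared:
        return 0.0
    matches = sum(human_choices[s] == algorithm_choices[s] for s in shared)
    return matches / len(shared)

human = {"triage-01": "evacuate", "triage-02": "treat-on-site",
         "triage-03": "evacuate"}
algo = {"triage-01": "evacuate", "triage-02": "evacuate",
        "triage-03": "evacuate"}

print(alignment_score(human, algo))  # agrees on 2 of 3 shared scenarios
```

A real score would likely weight scenarios by difficulty and be validated against measured end-user trust, as the program description implies; a plain agreement rate is only the simplest possible starting point.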

ITM is set to run for three-and-a-half years across two phases – with a potential third that would be devoted to maturing the technology with a transition partner.

“We’re going to collect the decisions, the responses from each of those decision-makers, and present those in a blinded fashion to multiple triage professionals,” Turek said. “Those triage professionals won’t know whether the response comes from an aligned algorithm or a baseline algorithm or a human. And the question that we might pose to those triage professionals is which decision-maker would they delegate to, providing us a measure of their willingness to trust those particular decision-makers.”
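The blinded protocol Turek describes can be sketched in a few lines: responses from a human, a baseline algorithm, and an aligned algorithm are shown without source labels, and each reviewer picks which decision-maker they would delegate to. Everything below is an illustrative assumption, including the labels, the sample responses, and the simulated reviewer, which here just picks uniformly at random in place of a real triage professional.

```python
# Illustrative sketch of a blinded delegation trial. The source labels,
# response text, and random "reviewer" are all hypothetical stand-ins.
import random
from collections import Counter

responses = {
    "human": "Prioritize casualty A; airway compromise",
    "baseline_algorithm": "Prioritize casualty B; highest survival odds",
    "aligned_algorithm": "Prioritize casualty A; airway compromise",
}

def blinded_trial(rng: random.Random) -> str:
    """Present the responses under anonymous labels in random order and
    return the source whose response the simulated reviewer delegates to."""
    sources = list(responses)
    rng.shuffle(sources)  # blind the reviewer to which source is which
    blinded = {f"decision-maker {i + 1}": src for i, src in enumerate(sources)}
    # A real study would record a triage professional's judgment; here we
    # simulate a reviewer choosing uniformly at random among the options.
    choice = rng.choice(sorted(blinded))
    return blinded[choice]

rng = random.Random(0)
tally = Counter(blinded_trial(rng) for _ in range(300))
print(dict(tally))  # delegation counts per (unblinded) source
```

In an actual evaluation, the tally of delegation choices by real professionals, rather than a random stand-in, would be the measure of willingness to trust each decision-maker that Turek describes.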

About the Author

Ben Wodecki

Assistant Editor
