
The US Defense Advanced Research Projects Agency (DARPA) is looking to develop AI algorithms that can make complex decisions in situations where two military commanders cannot agree on a decision.

Launched last month, DARPA’s In the Moment (ITM) program seeks to imitate the decision-making capabilities of trusted human leaders in situations where there is no agreed-upon right answer.

It aims to develop technology around realistic and challenging decision-making scenarios, map the technology's responses, and compare them with those of trusted human decision-makers. Such technology would make quick decisions in stressful situations using algorithms and data, based on the premise that removing human biases may save lives.

Such technology may prove useful for triage in mass-casualty events, such as combat, terrorist attacks and disasters.

According to ITM program manager Matt Turek, the program's approach to developing AI is different in that it does not require human agreement on the right outcomes. This mirrors difficult real-life situations in which the absence of an agreed-upon right answer prevents the use of traditional AI evaluation techniques, which depend on objective ground-truth data.
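To make the idea concrete, the sketch below scores an algorithm not against objective ground truth but by its agreement with a panel of trusted human decision-makers. This is a minimal Python illustration of the principle only; the scenario labels, panel structure and majority-vote scoring rule are assumptions made for the example, not DARPA's actual methodology.

```python
from collections import Counter

def alignment_score(algorithm_choices, panel_choices):
    """Fraction of scenarios in which the algorithm's choice matches
    the majority choice of a panel of trusted human decision-makers."""
    agreements = 0
    for algo_choice, panel in zip(algorithm_choices, panel_choices):
        # With no ground truth, the panel majority stands in as the reference.
        majority_choice, _ = Counter(panel).most_common(1)[0]
        if algo_choice == majority_choice:
            agreements += 1
    return agreements / len(algorithm_choices)

# Three hypothetical triage scenarios; each inner list holds one trusted
# human's preferred action for that scenario.
algorithm_choices = ["evacuate_first", "treat_on_site", "treat_on_site"]
panel_choices = [
    ["evacuate_first", "evacuate_first", "treat_on_site"],
    ["treat_on_site", "treat_on_site", "evacuate_first"],
    ["evacuate_first", "treat_on_site", "evacuate_first"],
]
score = alignment_score(algorithm_choices, panel_choices)
print(f"Alignment with trusted humans: {score:.0%}")  # 67%
```

The point of such a metric is that an algorithm can be ranked as more or less aligned with trusted experts even when the experts themselves disagree about individual cases.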

The ITM program aims to create and evaluate algorithms that will aid military decision-makers in two scenarios: first, small-unit injuries such as those sustained by Special Operations Forces, and second, mass-casualty events such as terrorist attacks or natural disasters.

The program is expected to run for three and a half years in cooperation with the private sector, but no budget figures have been announced.

In a crisis environment, collecting data, processing information and making decisions are extremely difficult tasks. In these situations, AI can potentially overcome human limitations but also pose ethical problems while reducing human autonomy, agency and capabilities.

However, one of the most glaring of these problems is that the use of such technology can be tantamount to having a machine decide who lives or dies in a mass-casualty event. Such a dilemma can be exacerbated by AI bias derived from training data that may reflect certain human preferences.

The use of AI also gives these biases scientific credibility, making it seem as though its predictions and judgments have objective status, when in fact it simply acts according to its maker's design parameters.

This may be an uncomfortable prospect considering the life-and-death situations in which AI is increasingly deployed. It also raises questions about whether some soldiers would be prioritized over others based on biased AI training data.

Such AI could possibly access confidential information about individuals when making decisions, raising privacy and surveillance concerns. At the same time, it is not clear whether commanders and soldiers deployed in the heat of battle would follow AI recommendations.

And then there is the potential dilemma of accountability with AI, especially when its decisions lead to injuries or fatalities.

These problems stem from the basic fact that human values, morality and ethics are not hard-coded into AI. However, AI can be used to clarify and strengthen the value systems used by military organizations, rather than being used as a substitute for key decision-makers. 

First, AI algorithms can reinforce the theoretical frameworks behind military decision-making, providing a positive bias effect instead of a discriminatory negative one. To do so, military leaders would have to constantly re-evaluate the moral basis of their decisions in order to supply the AI with a data set consistent with their professed values.

Second, AI does not become emotional in the heat of battle. It is not affected by stress, fatigue and poor nutrition. As such, AI can be used as a moral adviser when human judgment becomes impaired due to physical and emotional factors.

Third, AI can collect, process and disseminate information faster and on a much larger scale than human beings are capable of. AI can explore or uncover variables that may be too numerous or complicated for unaided human cognition, and that may have unforeseen impacts on subsequent decisions.

Lastly, AI can extend the time available for making ethical decisions. In the context of the ITM program, AI could optimize the application of medical care by correlating individual cases with the larger operational and strategic picture in real time, allocating and delivering resources much faster than traditional triage methods.
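As a rough illustration of what algorithmic resource allocation might look like, the Python sketch below matches casualties to scarce evacuation slots by a severity score. The severity scale, field names and highest-severity-first rule are hypothetical assumptions for the example; the ITM program has not published its methods.

```python
import heapq

def allocate_evac_slots(casualties, slots):
    """Assign the limited evacuation slots to the most severe casualties."""
    # heapq is a min-heap, so severity is negated for highest-first ordering.
    heap = [(-c["severity"], c["id"]) for c in casualties]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(min(slots, len(heap)))]

# Notional casualties, with severity on an assumed 0-10 scale.
casualties = [
    {"id": "alpha", "severity": 7},
    {"id": "bravo", "severity": 9},
    {"id": "charlie", "severity": 4},
]
print(allocate_evac_slots(casualties, slots=2))  # ['bravo', 'alpha']
```

A real system would weigh far more variables than a single severity score, which is precisely where the speed advantage over manual triage, and the bias concerns discussed above, both originate.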

Despite these advantages, perhaps no AI system can match human tenacity, first-hand situational awareness and overall survival instinct.