Stand by. Terminator-style nuclear weapons and systems are coming to a military near you.

Unmanned aerial vehicles, unmanned underwater vehicles and space planes are likely to be “the AI-enabled weapons of choice for future nuclear delivery,” a leading military think tank revealed during a recent seminar in Seoul.

AI, or artificial intelligence, enables faster decision-making than humans can manage and can replace humans in the decision matrix when leadership reacts too slowly – or is dead.

The Stockholm International Peace Research Institute, or SIPRI, released its report, The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume II: East Asian Perspectives, at a forum hosted by the Swedish Embassy in Seoul.

The question is whether weaponized AI, through its deterrent and defensive uses, ameliorates risk – or whether, by bringing new or enhanced capabilities to new theaters of combat and by rendering existing systems and weapons obsolete, it generates yet steeper risks.

Lora Saalman, the report’s editor, noted that AI is “a suite of technologies, not a technology.”

In terms of early-warning systems, Saalman noted, AI can identify signals and objects, decipher massive data sets faster than humans can, and make related predictions. In command and control, it can recognize patterns and enhance protection against cyber-attack. In cyber warfare, automation already exists and some sub-systems already operate autonomously.

AI offers information warriors new tools to manipulate (human) nuclear decision-makers – for example, it can generate faux orders or audio-visuals to trick operators, Saalman said. In terms of “hard” capabilities, it increases the onboard intelligence of airborne or waterborne drones, allowing them to better penetrate enemy defenses, making nuclear delivery both more maneuverable – and more autonomous.

Ultra deterrence

AI is emerging as a key adjunct to nuclear capabilities. The technology, when integrated with nuclear weapons platforms, “… has the potential to offer nuclear-armed states the opportunity to reset imbalances in capabilities, while at the same time exacerbating concerns that stronger states may use AI to further solidify their dominance,” the report reads.

“At the defensive level, integration of machine learning and autonomy into military platforms has a strong allure for countries with less-capable early warning systems, as well as smaller and weaker nuclear and conventional arsenals,” the report notes. Machines can make decisions “based on objective criteria to avoid the pitfalls of human error and to engage in faster anticipation, discrimination and response.”

The development and deployment of AI-enhanced platforms “have both been shaped by and have contributed to an interlocking series of national biases and assumptions that are driving AI integration and decision-making,” SIPRI noted.

One area where these biases and assumptions interlock is “Dead Hand” – the autonomous capability of a state to retaliate even when its leadership has been wiped out.

“Dead Hand” can now be an AI system, rather than the mechanical system that underpinned Russia’s nuclear deterrent at the end of the Cold War.

AI can also be used as the software in a range of hardware assets that deliver “Dead Hand” responses.

An example given was China’s fielding of “swarm-enhanced unmanned platforms in sea, air and space for surveillance and even engagement,” which suggests “a prevailing concern over the spread of US prompt and precise weaponry … that could result in decapitation of both its conventional and nuclear command and control and even arsenals.”

AI-enabled “Dead Hand,” then, could deter a stronger state from launching against a weaker state, as it ups the likelihood of mutually assured destruction. 

But other assets are downright alarming.

Underwater atomic drones

On the offensive front, strategic bombers and missile-armed submarines may be replaced by robots. Platforms such as unmanned underwater vehicles (UUVs), unmanned aerial vehicles (UAVs) and spaceplanes “… provide resiliency and survivability,” SIPRI noted. “These two aims indicate why such vehicles are likely to be the AI-enabled platforms of choice for future nuclear delivery.”

One such asset is Russia’s “Poseidon,” a nuclear-powered, nuclear-capable underwater drone. Torpedo-shaped, 25 meters long and driven by a modular nuclear reactor, it can move at more than 100 km/h at a depth of 1,000 meters and is armed with cobalt weapons. Though the drone is not yet in service, the Russian Navy ordered 30 units in 2019.

“Poseidon is a fantastic machine, but its consequences could be catastrophic,” said Hwang Il-soon, a South Korean nuclear engineer at the School of Mechanical, Aerospace and Nuclear Engineering. “It is a kind of dirty bomb – it creates very strong alpha radiation.”

He was dismayed by the weapon, given that Russia is one of the world’s leaders in the peaceful use of nuclear energy, particularly in reprocessing spent fuel.

Saalman noted that there are indications that Poseidon may be deployed to loiter off US coasts.

“Weapons like Poseidon should be banned not just for their environmental impact but for their negative impact on strategic stability,” said Michiru Nishida, Special Assistant for Arms Control, Disarmament and Non-Proliferation Policy at Japan’s Ministry of Foreign Affairs. “But it is different for a country like Russia that sees it as a stabilizing factor.”

Space robot missile killers

Some AI-enabled weapons, while defensive in nature, could render current weapons obsolete and take the arms race into new fields.

Vadim Kozyulin, of Moscow’s PIR Center, noted that there is little transparency around the US X-37B orbital test vehicle, a re-entry spacecraft that can land horizontally on runways, but “… it is a Pentagon project … so is designed for military purposes.”

Partial information suggests it could be a carrier of space-borne laser weapons that, within five years, may be able to counter ballistic missiles – a natural outcome of the 1980s Strategic Defense Initiative (“Star Wars”). SDI was credited, in part, with hastening the collapse of the Soviet Union, which could not afford to counter the system.

“I would say this project is being closely followed by many countries,” said Kozyulin. “It could provoke an arms race in space.”

The Pentagon is developing a new strategy of deploying “ghost fleets” of surface and undersea drones; the doctrine is expected to appear in September, Kozyulin said. With these weapons posing a risk to nuclear submarines, “the Russian and Chinese navies will no longer be sure of their nuclear weapons’ reliability,” he said.

Who wants what?

While the United States leads the field in defense spending and defense research, China, according to Saalman, is promoting convergence between private industry, universities and the People’s Liberation Army to develop weaponized AI. “Civil-military fusion has really taken hold in China,” she said, noting specific interest in machine learning, early warning systems, and guidance and targeting systems. Research is also underway into extra-large UAVs and into AI-controlled or AI-assisted nuclear weapons delivery systems.

“Autonomy offers China the ability to disrupt the traditional strengths of the US,” she said. “China is concerned about limitations; it is looking for cheaper ways to offset the imbalance.”

North Korea, she said, is promoting research into machine learning, speech and facial recognition systems, UAVs and AI-enabled cyber operations. “Algorithms generated by machine learning can be applied in large-scale attacks and pattern recognition,” she said. “And it can be used for data-poisoning disinformation.”

However, North Korea’s greatest interest in AI may be in creating a nuclear “Dead Hand.”

“When we think about asymmetry and survivability … and when we think of a country with the most concerns about existential threat, that certainly applies to North Korea, which is especially paranoid about decapitations,” she said.

Risk or anti-risk?

In terms of stabilizing impacts, Saalman said that in East Asia, AI can enable faster and more accurate early warning, protect nuclear weapons against cyber-attack and foster more survivable delivery systems. It also provides new tools for arms control and enables more complex wargaming for decision-makers.

On the other side of the equation are destabilizing impacts. In East Asia, remote sensing via reconnaissance satellite networks has already undermined nuclear deterrence. It can also threaten the survivability of nuclear assets, eroding confidence in deterrence in a way that “forces parties to rely on more survivable, but less controlled, platforms,” Saalman noted.

She noted that in the region, “the AI-enhanced arms race can become more prominent … India, China and the US [are] all working on this.”

Moreover, the new variables and complexities are prompting stance shifts. “This can cause states to alter their nuclear postures – for example, no first use,” she said. She noted that in 2019 India signaled greater conditionality in its no-first-use terms, and that China’s 2013 defense white paper omitted its no-first-use pledge.

“They have since put it back in,” Saalman said. “But how do you verify it?”

Human vs AI

The ultimate fear – one widely featured in science fiction – is that weaponized AI could supplant or overrule humans.

In the nuclear energy industry, AI has been severely restricted. “We concluded that we cannot allow AI to control nuclear conditions – maybe minor conditions,” Hwang said. “You need a human operator.”

Regarding weaponry, the human is just part of a spectrum, Saalman said. “It can be a human supervising, but at the farthest end there is no human – for example, in an autonomous weapons system. Where we should put the controls is a very important question.”

“I don’t know a single military or political leader who is ready to give control to AI at this moment, but one day they will, because information is collected by machines and processed by them and has to be translated into human language,” said Kozyulin. “Machines do that and provide scenarios on how to react to crises, so probably, the machines know better. But the last word has to be given to humans.”

However, he conceded that all depends on the algorithms installed in the machines; once approved by a human leader, those algorithms can enable an autonomous, decision-making machine.

Still, there are positives.

“We have plenty of mechanisms to prevent crises nowadays: Countries are interconnected – economies, finance, cyberspace, transport and so on – so we are all in one boat and more connected,” Kozyulin said. He compared today to the 1962 Cuban missile crisis, which took 13 tense days to resolve.

Moreover, satellite networks and cyber troops now offer more and better information to decision makers than ever before.

But due to the “time compression” of decision-making cycles, the space for human deliberation will become so minimal that humans “will probably give the final decision to AI,” he said.

Kozyulin noted that AI might be safer than humans. For example, 70% of air crashes are due to human, not technical, error. Then there are the vagaries of command protocols. “In the 1960s, commanders of Russian nuclear submarines had orders to act ‘according to the situation’ which did not mean anything!” he said.

Computerized early warning systems have already brought the world to the brink.

“Historically, early warning systems made three mistakes a week – only humans realized these were computer errors and stopped it happening,” he said. “Humans made the decisions – [they realized] only five missiles were not enough for an attack on the USSR.”

Currently, “intuition is not available to AI, but it is being developed – it is experience, collected,” he said.

And there is a potentially very bright upside to militarized AI technologies.

Kozyulin suggested that if an appropriate international treaty could be crafted, AI could be embedded in competing nations’ early warning systems, providing autonomous monitoring, fail-safe and de-escalation mechanisms.