It was at the Association of the United States Army (AUSA) conference in Washington, DC, last fall that I attended a press gathering featuring two US Army generals, and one of the topics happened to be autonomous weapons.
What struck me was not that these advanced weapons are coming down the pike (that is an absolute certainty) but rather how much leeway to give them.
There was concern that too much leeway could lead to disaster, the old problem of friendly fire. Things still have to be worked out on that front, they said.
It was apparent that we are not quite there yet, but make no mistake: these weapons are coming, and they will change the way wars are fought.
There appears to be increasing agreement among some countries that fully autonomous weapons should be banned to prevent the creation of so-called killer robots, a new report warns.
It would be “unacceptable” if weapons systems were able to select and kill targets without human oversight, the researchers warn.
The research by Human Rights Watch found that 30 countries had now expressed support for an international treaty requiring that human control be retained over the use of force, The Independent reported.
The new report, “Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control,” reviews the policies of 97 countries that have publicly discussed killer robots since 2013.
Mary Wareham, arms division advocacy director at Human Rights Watch and coordinator of the Campaign to Stop Killer Robots, said urgent international action is needed on the issue as technology such as artificial intelligence continues to spread.
“Removing human control from the use of force is now widely regarded as a grave threat to humanity that, like climate change, deserves urgent multilateral action,” she said.
“An international ban treaty is the only effective way to deal with the serious challenges raised by fully autonomous weapons.
“It’s abundantly clear that retaining meaningful human control over the use of force is an ethical imperative, a legal necessity and a moral obligation.
“All countries need to respond with urgency by opening negotiations on a new international ban treaty.”
Leaders in the fields of AI and robotics, including Elon Musk and Google DeepMind’s Mustafa Suleyman, have signed a letter calling on the United Nations to ban lethal autonomous weapons.
The report suggests that while a number of international organizations and countries have backed the campaign, a small number of military powers, including the United States and Russia, have “firmly rejected proposals” around regulation.
Both countries, along with China and Israel, are pouring R&D funding into the development of artificial intelligence on the battlefield, in the air and on the seas, and it is highly doubtful that this tsunami of scientific research can be stopped.
According to Vox, experts in machine learning and military technology say it would be technologically straightforward to build robots that make decisions about whom to target and kill without a “human in the loop.”
And as facial recognition and decision-making algorithms become more powerful, it will only get easier.
“When people hear ‘killer robots,’ they think Terminator, they think science fiction, they think of something that’s far away,” Toby Walsh, a professor of artificial intelligence at the University of New South Wales and an activist against lethal autonomous weapons development, said. “Instead, it’s simpler technologies that are much nearer, and that are being prototyped as we speak.”
The weapon in question would not look like the Terminator. The simplest version would use existing military drone hardware, Vox reported.
But while today’s drones transmit a video feed back to a military base, where a soldier decides whether the drone should fire on the target, with an autonomous weapon the soldier would not make that decision; an algorithm would.
According to Vox, the algorithm could have a fixed list of people it can target and fire only if it’s highly confident (from its video footage) that it has identified one of those targets. Or it could be trained, from footage of combat, to predict whether a human would tell it to fire, and fire if it thinks that’s the instruction it would be given. Or it could be taught to fire on anyone in a war zone holding something visually identifiable as a gun and not wearing the uniform of friendly forces.
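To see how simple such a decision rule can be, here is a minimal, purely illustrative sketch of the first option described above (a fixed target list with a confidence threshold). The classifier, its output format, and the names and threshold used here are all assumptions for illustration, not any real system:

```python
# Illustrative sketch of a "fixed target list" decision rule.
# The (label, confidence) detection format is a hypothetical stand-in
# for the output of an image classifier; no real system is implied.

TARGET_LIST = {"target_a", "target_b"}   # fixed, pre-approved identities
CONFIDENCE_THRESHOLD = 0.95              # act only when highly confident

def should_fire(detections):
    """detections: list of (label, confidence) pairs from one video frame."""
    for label, confidence in detections:
        if label in TARGET_LIST and confidence >= CONFIDENCE_THRESHOLD:
            return True
    return False

# A bystander at any confidence is ignored; a listed target above
# the threshold triggers the rule.
print(should_fire([("bystander", 0.88), ("target_a", 0.97)]))  # True
print(should_fire([("target_a", 0.60)]))                       # False
```

The point of the sketch is that the entire "decision" collapses to a set lookup and a threshold comparison, which is precisely why experts describe this as technologically straightforward.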
Military analysts cite four arguments for autonomous weapons, Vox reported.
- With no human in the loop, you can launch as many weapons as you like, which opens up a world of new capabilities.
- Current drones need to transmit and receive information from their base, making them subject to potential jamming — autonomous drones could act freely.
- Weapons can be programmed to know the laws of war, and accordingly could refuse any order from a human that violates those laws.
- And lastly, there is the claim that killer robots are more ethical. Humans commit war crimes, deliberately targeting innocents or killing people who’ve surrendered. Robots, by contrast, follow their code exactly; they never get angry or seek revenge.
There is, of course, one serious drawback: fully autonomous weapons would make it easier and cheaper to kill people, a dangerous prospect in the wrong hands.
Lethal autonomous weapons also seem like they’d be disproportionately useful for ethnic cleansing and genocide — a frightening possibility.
Weapons used on the battlefield can also be scavenged and, as in the Terminator movies, could be turned against Western countries.
According to Vox, the situation with lethal autonomous weapons is just one manifestation of a much larger trend.
AI is making things possible that were never possible before, and doing so quickly, such that our capabilities frequently get out ahead of thought, reflection and public policy.