The autonomous system, Origin, prepares for a practice run during the Project Convergence capstone event at Yuma Proving Ground, Arizona. (U.S. Army photo by Spc. Carlos Cuebas Fantauzzi).

Two robot Terminators trundled across the Yuma Desert, in search of the enemy.

Like human troops, the machines took turns covering each other as they advanced.

One robot would find a safe spot, stop, and launch the tethered mini-drone it carried to look over the next ridgeline while the other bot moved forward; then they’d switch off.

The machines are part of the Army’s Project Convergence exercise on future warfare, but they don’t look like the movie Terminators.

They look more like high-tech golf carts with weapons and sensors, Sydney J. Freedberg Jr. of Breaking Defense reported.

Their objective was a group of buildings on the Army’s Yuma Proving Ground, a simulated town for urban combat training.

As one robot held back to relay communications to its distant human overseers, the other moved into the town — and spotted “enemy” forces.

With human approval, the robot opened fire. Whammo! Target destroyed.

Luke Travisano, an engineer with Robotic Research LLC, conducts a test run of the autonomous system Pegasus during the Project Convergence capstone event at Yuma Proving Ground, Arizona. (U.S. Army photo by Spc. Carlos Cuebas Fantauzzi).

Then the robot’s onboard Aided Target Recognition (ATR) algorithms identified another enemy, a T-72 tank. But this target was too far away for the robot’s weapons.

So the bot uploaded the targeting data to the tactical network and, again with human approval, called in artillery support.

“That’s a huge step, Sydney,” said Brig. Gen. Richard Ross Coffman, the Project Convergence exercise director. “That computer vision … is nascent, but it is working.”

Algorithmic target recognition and computer vision are critical advances over most current military robots, which aren’t truly autonomous but merely remote-controlled: the machine can’t think for itself; it just relays camera feeds back to an operator, who tells it where to go and what to do, Breaking Defense reported.

That approach, called teleoperation, does let you keep the human out of harm’s way, making it good for bomb squads and small-scale scouting. But it’s too slow and labor-intensive to employ on a large scale.

If you want to use lots of robots without tying down a lot of people micromanaging them, you need the robots to make some decisions for themselves, although the Army emphasizes that the decision to use lethal force will always be made by a human, Breaking Defense reported.
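
As a rough illustration of that division of labor, here is a minimal Python sketch of a "supervised autonomy" control loop: the robot navigates and detects on its own, but blocks on an explicit human decision before any shot. Every name here is hypothetical and stands in for systems the Army has not published.

```python
# Hypothetical sketch of "supervised autonomy": the robot drives and
# scouts on its own, but blocks on a human decision before any shot.
# Robot and Detector are stand-in interfaces, not real Army software.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "T-72"
    confidence: float  # classifier score, 0..1
    grid_ref: str      # location on the tactical grid

def human_approves(det: Detection) -> bool:
    """Stand-in for the operator console: a person reviews the sensor
    feed and explicitly authorizes or denies the engagement."""
    reply = input(f"Engage {det.label} ({det.confidence:.0%}) at {det.grid_ref}? [y/N] ")
    return reply.strip().lower() == "y"

def patrol_step(robot, detector) -> None:
    robot.advance_to_next_waypoint()           # autonomous: navigation
    for det in detector.scan(robot.camera()):  # autonomous: target recognition
        if det.confidence < 0.5:
            continue                           # too uncertain even to report
        if human_approves(det):                # never autonomous: lethal force
            robot.engage(det)
        else:
            robot.report(det)                  # push targeting data to the network
```

The point of the structure is that autonomy and authority live in different places: navigation and recognition run on the robot, while the one irreversible decision always routes through a person.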

So Coffman, who oversees the Robotic Combat Vehicle and Optionally Manned Fighting Vehicle programs, turned to the Army’s Artificial Intelligence Task Force at Carnegie Mellon University.

The U.S. Army autonomous weapons system “Origin” self-adjusts firing positions during testing operations at Project Convergence 20, Yuma Proving Ground, Arizona, August 25, 2020. (U.S. Army photo by Pvt. Osvaldo Fuentes).

“Eight months ago,” he told me, “I gave them the challenge: I want you to go out and sense targets with a robot — and you have to move without using LIDAR.”

LIDAR, which uses low-powered laser beams to detect obstacles, is a common sensor on experimental self-driving cars. But, Coffman noted, because it’s actively emitting laser energy, enemies can easily detect it, Breaking Defense reported.

So the Project Convergence robots, called “Origin,” relied on passive sensors: cameras.

That meant their machine vision algorithms had to be good enough to interpret the visual imagery and deduce the relative locations of potential obstacles.

That may seem simple enough to humans, but it’s a radical feat for robots, which still struggle to distinguish, say, a shallow puddle from a dangerously deep pit, Breaking Defense reported.
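
The Army hasn’t described Origin’s vision stack in detail, but as a rough illustration of camera-only obstacle sensing, here is a minimal sketch using the open-source MiDaS monocular depth model, a plausible stand-in rather than the Army’s actual code; the input file name and the nearness threshold are placeholders.

```python
# Sketch of passive, camera-only obstacle sensing using the open-source
# MiDaS monocular depth model -- a plausible stand-in, not the Army's code.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    pred = midas(transform(img))  # relative inverse depth: bigger = closer
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False).squeeze()

# Flag image regions much closer than the rest of the scene as candidate
# obstacles. Note the hard part the article describes: a depth map alone
# still can't tell a shallow puddle from a deep pit.
near_mask = pred > pred.mean() + 2 * pred.std()
```

Unlike LIDAR, nothing here emits any energy; the trade-off is that depth must be inferred statistically from imagery rather than measured directly.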

Recognizing targets is another big challenge, and that’s where Aided Target Recognition comes in. Sure, artificial intelligence has gotten scarily good at identifying individual faces.

With new data collection capabilities, the “Origin” provides soldiers on the front lines with precise terrain descriptions. Note the tethered mini-drone at the back left of the vehicle. (U.S. Army photo by Pvt. Osvaldo Fuentes).

But the private sector hasn’t invested nearly as much in, say, telling the difference between an American M1 Abrams tank and a Russian-made T-72, or between an innocent Toyota pickup and the same truck sporting a heavy machine gun in the back, Breaking Defense reported.

“Training algorithms to identify vehicles by type, it’s a huge undertaking,” Coffman said.

“We’ve collected and labeled over 3.5 million images” so far to use for training machine-learning algorithms, he said, and that labeling requires analysts to look at each and every picture and tell the computer what it shows.

But each individual robot or drone doesn’t need to carry those millions of images in its own onboard memory: It just needs the “classifier” algorithms that result from running the images through machine-learning systems.

Because those algorithms themselves don’t take up a ton of memory, it’s possible to run them on a computer that fits easily on the individual bot, Breaking Defense reported.
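
To make that split concrete, here is a minimal, hypothetical PyTorch sketch of the workflow: train a small off-the-shelf classifier offline on a folder of labeled vehicle images, then export a compact model file for onboard use. The dataset path, class labels, and model choice are all illustrative, not the Army’s.

```python
# Hypothetical sketch: the millions of labeled images stay offline; the
# robot only ever carries the compact classifier distilled from them.
# Paths, labels, and model choice are illustrative, not the Army's.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# "labeled_vehicles/" holds one subfolder per class, e.g. m1_abrams/,
# t72/, pickup/, armed_pickup/ -- ImageFolder infers labels from folder names.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("labeled_vehicles/", transform=preprocess)
loader = torch.utils.data.DataLoader(data, batch_size=64, shuffle=True)

# Small backbone, retargeted from ImageNet to the vehicle classes.
net = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, len(data.classes))

opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for imgs, labels in loader:          # a single epoch, for brevity
    opt.zero_grad()
    loss_fn(net(imgs), labels).backward()
    opt.step()

# The traced model file is a few megabytes: that, not the 3.5M images,
# is what rides on the robot's onboard computer.
torch.jit.trace(net.eval(), torch.randn(1, 3, 224, 224)).save("vehicle_classifier.pt")
```

The exported file is what the article means by the “classifier” riding along: a few megabytes of weights rather than terabytes of training imagery.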

“We’ve proven we can do that with a tethered or untethered UAV. We’ve proven we can do that with a robot. We’ve proven we can do that on a vehicle,” Coffman said. “We can identify the enemy by type and location.”

In other words, the individual robot doesn’t have to constantly transmit real-time, high-res video of everything it sees to some distant human analyst or AI master brain.
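
As a hedged illustration of how small those reports can be, here is a sketch of one detection message; the field names and grid-reference format are invented for illustration, not an actual Army network format.

```python
# Hypothetical sketch of the bandwidth savings: one detection report is
# tens of bytes, versus megabits per second for live high-res video.
# Field names and the grid reference format are invented for illustration.
import json
import time

def detection_report(label: str, confidence: float, grid_ref: str) -> bytes:
    msg = {"type": label, "conf": round(confidence, 2),
           "loc": grid_ref, "ts": int(time.time())}
    return json.dumps(msg).encode()

packet = detection_report("T-72", 0.91, "11SPA1234567890")
print(len(packet), "bytes")  # a far cry from streaming raw video
```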

But before the decision is made to open fire, Coffman emphasized, a human being has to look at the sensor feed long enough to confirm the target and give the order to engage.

“Could that be done automatically, without a human in the loop?” he said. “Yeah, I think it’s technologically feasible to do that. But the United States Army [is] an ethics-based organization. There will be a human in the loop.”

It also looks like one could put a set of golf clubs in the back.