
Should war robots have a “license to kill”?

War is changing. As drones replace snipers, we must consider the ethics of machines making life or death decisions.

Perched on a mountaintop, straddling the border of Afghanistan and Pakistan, Paul Scharre’s Army Ranger sniper team was scoping out routes Taliban fighters were suspected of using to move between the countries. 

They had gotten into position under the cover of darkness, but they were spotted early in the morning by a farmer, Scharre writes in Army of None, his book about autonomous weapons and the future of war.

“Now, what we expected to happen was that some fighters would come to attack us,” Scharre tells Freethink. “And we were ready for that.”

Sure enough, someone does head out of the village and moves towards Scharre’s position. But this is not a Taliban fighter; down the sights of his sniper rifle is a young girl, perhaps just five or six, with two goats in tow.

She begins “herding” the goats, Scharre writes, taking long, slow loops and looking at the sniper team’s position. A radio she is carrying chirps.

“They sent a little girl to scout on our position,” Scharre says.

Scharre does not pull the trigger, of course. It was never an option, even under the extremely dangerous circumstances. But what if a human being wasn’t there to make that choice? What if an autonomous weapons system had spotted the girl, and acted according to its programming? 

“If you programmed a robot to comply perfectly with the laws of war,” Scharre says, “it would have killed this little girl.”

The Future of Conflict

Autonomous weapons aren’t science fiction, aren’t some distant concern, aren’t the nightmare-inducing, shiny skeleton Terminators or even the bumbling battle droids (roger, roger) of the Star Wars prequels.

They’ve existed, with varying levels of autonomy, for hundreds of years, and they are getting more sophisticated all the time. Today, powered by deep learning and advanced computer vision, autonomous weapons can exhibit a degree of “freedom” never before achieved in warfare.

An autonomous weapons system housed on a drone may have seen combat in Libya in 2020, the same year Israeli agents assassinated Iran’s top nuclear scientist using a robotic sniper.

Their proponents believe that if autonomous weapons are more accurate, more precise, and kill only their intended targets, the result could be less death and fewer casualties, both for the militaries deploying them and for the civilians caught in the crossfire. It could also mean fewer people bearing the psychological and emotional burden of killing.

Opponents believe that to expect a machine to be able to handle the chaotic environment of a battlefield, to operate perfectly in a crowded city, to not suffer one of those weird errors AI often makes — with deadly consequences — is unrealistic and dangerous. 

And they argue that, far from a benefit, relieving human beings from the moral and tactical burden of deciding when to kill a person — deliberately losing control of a weapon — is an ethical nightmare that cannot be allowed to happen.

It’s that final concern that separates the autonomous weapons of today from those we’ve used before: the ability to actively hunt down and attack targets, with no final human say on pulling the trigger.

It’s a question at the heart of the debate around autonomous weapons, as militaries develop them and arms control and humanitarian organizations, like the Campaign to Stop Killer Robots, seek a ban.

“Is it acceptable,” Scharre asks, “for a machine to make life and death decisions in war?”

A Brief History of Autonomous Weapons

“Autonomous weapons” conjures up images of Predator drones and robot dogs, but the use of weapons that can act on their own is actually centuries old.

Mines, both on land and at sea, are passive autonomous weapons. Once they have been set, they do not need to be told to kill. Instead, their sensors detect the presence of someone or something and trigger them to detonate. They highlight that autonomous weapons do not need “intelligence.”

What mines do have, however, is patience; they can lie in wait for decades, killing any who cross their path. They do not know a war is over. 

Army of None also recounts autonomous weapons systems of the past that seem advanced even today. Militaries have been developing weapons which can wander around a certain area seeking targets — known as “loitering munitions” — for decades.

In the 1980s, the U.S. Navy designed and fielded a loitering missile called the Tomahawk Anti-Ship Missile (TASM).

The idea was that you could launch the TASM over the horizon, where you suspected Soviet ships would be. Once it arrived at its destination, the TASM would fly a search pattern, looking for Soviet radar signals. If it found one, it would attack the source.

The TASM was taken out of service in the 1990s and never used in combat, but it was the first fully operational autonomous weapon of the modern era, Scharre writes.

Another loitering munition, one still in use, is the Israeli-made Harpy. This drone circles a specific area and homes in on the signals emitted by radar systems, which it then attacks. According to Scharre, the Harpy can stay airborne for two and a half hours and cover 500 km of ground — and it can do so without a human in the loop.

And it is the humans — not the technology — that are at the heart of the fight over the future of war. Could autonomous weapons reduce the burden on human life by avoiding mistakes? They may mean less trauma, psychological and physical, for the people fighting wars. Or would they make war too easy, and thus more common?

“If we had a war, and no one felt bad about the fact that we were killing other human beings, what would that say about us?” Scharre asks.

“What would it say about us if no one slept uneasy at night afterwards?”

Which leads to the question: Should autonomous weapons be banned — and can they be?

To Build or To Ban

The proponents of autonomous weapons, including powerful militaries and major defense contractors, see great upside in them. 

Writing in Military Review, the Army’s professional journal, Amitai Etzioni and Oren Etzioni, the CEO of Seattle’s Allen Institute for AI, lay out some of the primary advantages of autonomous weapons systems for those deploying them.

Autonomous weapons are “force multipliers.” This means that a commander can use fewer people for a mission, as the capabilities of each person are enhanced. You could imagine one drone operator destroying an entire convoy, rather than dispatching dozens or hundreds of soldiers with anti-vehicle weapons.

That technology could be deployed in places or conditions too dangerous for conventional troops, and, of course, fewer soldiers at risk means fewer soldiers killed.

Some military experts and roboticists believe autonomous weapons have moral benefits, as well.

The Etzionis quote roboticist Ronald C. Arkin, who argues that “they do not need to be programmed with a self-preservation instinct, potentially eliminating the need for a ‘shoot-first, ask questions later’ attitude.” Their orders would not be colored by emotions like fear, they could be used to monitor and report human war crimes without bias, and removing humans from high-stress combat zones could help preserve their mental health.

But many of those benefits rest on the assumption that autonomous weapons work correctly. Already, Teslas are befuddled by road conditions, and the facial recognition cameras that unlock phones struggle with certain skin tones. Could those systems be trusted when the stakes are so much higher?

The argument for banning autonomous weapons outright has been taken up by almost 100 countries and the United Nations.

“Autonomy in weapons systems is a profoundly human problem,” the Campaign to Stop Killer Robots writes in its argument for banning them.

“Killer robots change the relationship between people and technology by handing over life and death decision-making to machines.”

The technology behind autonomous weapons can carry the same biases as the people who designed them, the campaign maintains. They may also lower the threshold for war by reducing the human cost of starting one — a cruel twist.

Autonomous weapons could also remove human judgment, understanding, and accountability from war. An autonomous drone might have fired on that young Afghan girl — and its programming could have justified it in doing so.

War is inherently too complicated, and civilians too often in the line of fire, to hand control fully to machines, opponents argue.

“There are many situations in war that are gray,” Scharre says. “And that’s a real challenge when it comes to autonomous weapons. How would a robot know the difference between what’s legal, and what’s right?”

It’s a question which needs to be answered soon. The future of warfare is already here.
