
Killer robots will fight our wars: Can they be trusted?

The wars of the future will be fought by swarms of robots. Can we trust them to make moral decisions on behalf of humanity?

With the development of autonomous weapons underway, mankind is on the brink of a new type of warfare. Although the idea of killer robots may sound terrifying, experts are working to ensure that our worst science-fiction nightmares don’t become reality.

Paul Scharre, author of the award-winning book Army of None: Autonomous Weapons and the Future of War, is also the Director of the Technology and National Security Program at the Center for a New American Security.

Scharre brings extensive experience in autonomous weapons policy, along with combat experience gained as a special operations reconnaissance team leader in the Army’s 3rd Ranger Battalion.

“When people worry about autonomous weapons,” Scharre says, “one of their fears would be robots running amok and killing civilians. No one’s building that. That’s the good news.”

He continues, “There are many situations in war that are gray, and that’s a real challenge, when it comes to autonomous weapons.” Scharre suggests that answering questions, such as whether a robot could make morally sound decisions, will help us find a humane way to move forward with this new technology.

In Defense of Autonomous Weapons

It might be news to some, but the US military already uses autonomous weapons: not necessarily killer robots, but technology that allows weapons to operate unmanned. In fact, the military used remotely controlled vehicles as early as WWII.

Today, the military has unmanned combat aerial vehicles, also known as drones, in addition to a range of missile guidance technologies and automated missile defense systems. These technologies were developed to make warfare more effective and safer for both civilians and soldiers.

The idea that weapons become increasingly dangerous as they grow more advanced is flawed. Advancing weapon technology does result in more powerful military tools. Thanks to automation, however, those tools have also become much more precise.

Missile guidance is a form of weapon autonomy that allows a military to steer a missile to a precise location and even trigger its detonation on command. Before it existed, militaries had to drop blankets of bombs to improve their odds of hitting a single target, tactics that destroyed countless unintended targets.

Between September 1940 and May 1941, before the age of missile guidance, the German bombing campaign against British cities known as “the Blitz” killed about 43,000 civilians and injured another 139,000. And the Blitz is just one example. People still die in modern warfare, but guidance technology prevents countless civilian deaths.

With similar goals, drones were developed to eliminate the need for soldiers to physically access dangerous places, and automated missile defense systems could potentially save the lives of millions should they ever be needed.

How Much Should We Trust Killer Robots? 

Artificial intelligence has transformed our understanding of what “autonomous” actually means. With respect to weapons systems, three levels of autonomy are generally recognized, illustrated in the short sketch after the list below.

Three Levels of Autonomy

1. Semi-autonomous weapons, with which a human remains in the loop and in control of targeting.

2. Supervised autonomous weapons, with which a human oversees the system, ready to intervene, as it performs tasks that happen too quickly for a person to direct.

3. Fully autonomous weapons, which operate without any human involvement, for example when there is no communication link back to an operator.
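
To make the distinction concrete, here is a minimal sketch in Python of where the human sits at each level. The names (`AutonomyLevel`, `may_engage`) and the logic are hypothetical illustrations of the taxonomy, not code from any real weapons system.

```python
from enum import Enum

class AutonomyLevel(Enum):
    """The three levels of weapon autonomy described above (illustrative only)."""
    SEMI_AUTONOMOUS = 1        # human "in the loop": approves every engagement
    SUPERVISED_AUTONOMOUS = 2  # human "on the loop": machine acts, human can abort
    FULLY_AUTONOMOUS = 3       # human "out of the loop": no human gate at all

def may_engage(level: AutonomyLevel,
               human_approved: bool,
               human_can_abort: bool) -> bool:
    """Gate an engagement on the human-control requirement of each level."""
    if level is AutonomyLevel.SEMI_AUTONOMOUS:
        return human_approved   # nothing happens without explicit approval
    if level is AutonomyLevel.SUPERVISED_AUTONOMOUS:
        return human_can_abort  # machine acts on its own, but a supervisor
                                # must retain the ability to stop it
    return True                 # fully autonomous: the machine decides alone

# A communications blackout removes the supervisor's abort channel, so a
# supervised system should hold fire, while a fully autonomous one would not.
assert not may_engage(AutonomyLevel.SUPERVISED_AUTONOMOUS, False, False)
assert may_engage(AutonomyLevel.FULLY_AUTONOMOUS, False, False)
```

The difference shows in the last two lines: a supervised system loses its authority to act the moment the human abort channel disappears, while a fully autonomous one keeps operating with no human check at all.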

The hypothetical killer robot running amok would fall into the third category. The question on everyone’s mind is whether these machines should even be allowed to kill.

The issue with taking humans entirely out of the loop, and giving robots the authority to kill, comes down to the importance of our humanity. Our ability to feel and evaluate complex situations, using human judgment and moral understanding, is not something an artificial intelligence can easily learn.

Warfare is complex and rife with unpredictable, chaotic situations that demand a human’s unique faculty for sound judgment. Drawing on his experience in Afghanistan, Scharre offers a real-life example of a situation in which a fully autonomous killer robot would likely have killed an innocent child.

Scharre was part of an Army Ranger sniper team when a little girl, who they estimated was five or six years old, walked a wide circle around their position. As she watched them, they heard the chirp of a radio she was carrying, and not long after she left, a group of Taliban fighters arrived.

Scharre says that neither he nor his fellow soldiers would ever have considered shooting the little girl. But the rules of warfare do not set an age for a legal combatant; combatants are defined by their behavior. Even though she was a child, the girl was, legally speaking, a combatant, because she was actively scouting for the enemy.

In that same situation, Scharre surmises, a robot applying those rules to the letter would have recognized the girl as a combatant and fired, lacking the human ability to discern the moral subtleties of the situation.

Even in combat, with clearly outlined rules of warfare, choosing to pull the trigger is rarely a black-and-white decision. Any slightly nuanced situation could lead military robots to commit war crimes or use excessive force. The amount of discernment required to assess different levels of threat isn’t going to be entrusted to a robot anytime soon.

The Next Generation of Autonomous Weapons 

Scharre states that the next generation of autonomous weapons probably won’t be killer robots walking on two legs. They’re more likely to come in the form of swarming drones, flying together like massive flocks of birds.

The drones won’t be individually or directionally controlled, but given tasks to work together to fulfill. They might be used to secure a location or perform reconnaissance. They could be designed to fight other swarms, removing humans from combat altogether. Whatever the future of war holds, it’s a certainty that artificial intelligence will play a key role in defining it.

For more from our series about the latest in robotics news and tech, subscribe to Freethink.
