Posted: June 3, 2016 | Updated: July 8, 2016

The Ethical Case For Killer Robots

It seems counterintuitive, but these weapons could save innocent people.
AMEER ALHALBI/AFP/Getty Images
Syrians help a wounded youth after an airstrike on the al Fardous rebel-held neighborhood of Aleppo.

How can we improve the act of killing? And should we?

As we enter the era of artificial intelligence, some argue that our weapons should be smarter to better locate and kill our enemies while minimizing risk to civilians. The justification is not so different from the one for smart thermostats: Data and algorithms can make our technology more efficient, reducing waste and, theoretically, creating a better planet to live on.

It's just that "reducing waste" means something very different when you're talking about taking lives as opposed to cooling your bedroom.

But if war and killing are inevitable, it makes sense to make our weapons as precise as possible, argues Ronald Arkin, a well-known robotics expert and associate dean at Georgia Tech. He's written extensively on the subject -- including in a 256-page textbook -- and concludes that while he's not in favor of any weapons per se, he does believe robots with lethal capacity could do a better job protecting people than human soldiers.

He's not without his opponents.

"This entire space is highly controversial," Arkin conceded with a chuckle in a recent interview with The Huffington Post.

That's partially because these robots have yet to be defined. But don't imagine the Terminator. Think instead of drones that can pilot themselves, locate enemy combatants and kill them without harming civilians. The battlefield of the near future could be filled with these so-called lethal autonomous weapons (commonly abbreviated "LAW") that could be programmed with some measure of "ethics" to prevent them from striking certain areas.

"The hope is that if these systems can be designed appropriately and used in situations where they will be used appropriately; that they can reduce collateral damage significantly," Arkin told HuffPost.

U.S. Navy photo courtesy of Northrop Grumman/Alex Evers/Handout via Reuters
The Triton unmanned aircraft system completes its first flight from the Northrop Grumman manufacturing facility in Palmdale, California, in 2013. The Triton is designed for surveillance, not killing.

That might be an optimistic view. Critics of autonomous weapons worry that once the robots are developed, the technology will proliferate and fall into the wrong hands. Worse, that technology could be a lot scarier than the relatively large drones of today. There could be LAW devices programmed with facial recognition that relentlessly seek out targets (or even broader categories of people). And as journalist David Hambling described in his book Swarm Troopers, advances in technology could allow these robots to become incredibly small and self-sufficient, creating flocks of miniature, solar-powered drones that in effect become weapons of mass destruction far beyond the machines Arkin imagines.

This isn't simple stuff. But it also isn't theoretical. The technology is already being developed, which is why experts are calling for an international agreement on its functionality and deployment to happen, well, yesterday.

To learn a bit more about the case for these weapons as lifesaving tools, HuffPost got Arkin on the phone.  

Your premise assumes a lot about how consistent our definition of warfare is. Is it realistic to expect that our current model of nations warring with other nations will stay the same? Even now, the self-described Islamic State has an entire strategy that revolves around terrorizing and killing civilians.

That begs the question of whether international humanitarian law will hold valid in the future, and just war theory, for that matter, which most civilized countries adhere to. There are state actors and non-state actors who stray outside the limits or blatantly disregard what is prescribed in international humanitarian law, and that's what warfare is at this point in time. There have always been war crimes, since time immemorial. Civilians have been slaughtered since the beginning of warfare. We've been slaughtering each other throughout recorded history.

If you take a total war point of view and a scorched earth policy to conducting warfare, it doesn't matter if you have robots or not.

- Ronald Arkin, robotics expert

So, the real issue is, will warfare change? And the answer is yes. The hope is that if these systems can be designed appropriately and used in situations where they will be used appropriately, they can reduce collateral damage significantly. But that's not to say there won't be people who use them in ways that are criminal, just as they use human troops in criminal ways right now -- authorizing rape, for example, in Africa.

The issue fundamentally is, if we create these systems -- and I feel they inevitably will be created, not only because there's a significant tactical advantage in creating them, but also because, in many cases, they already exist -- we must ensure that they adhere to international humanitarian law.

If you take a total war point of view and a scorched earth policy to conducting warfare, it doesn't matter if you have robots or not. You can drop nuclear weapons on countries and destroy them at this point in time if you choose to do that.

Right.

The alternative is to say -- and I'm not averse to this, either -- we will never, ever conduct research into military weapons again. And then, of course, you lose the asymmetric advantage one nation has over another, including the United States, with advanced weapons technology that can not only execute missions more effectively but, if done appropriately, do so without a concomitant loss of civilian life.

It's not to say there aren't risks. I'm well aware there are risks in moving this forward. The point is, if we can build restrictions into these systems so that they cannot be used inappropriately, and so that any attempt at misuse provides strong and hard evidence of criminality, it will be far easier to convict war criminals than it is by trying to inspect the intentions of the human mind.

So, you're talking about programming a sort of ethical model into the machine?

Yes, that's the work I did for the Army Research Office from 2006 to 2009, looking at a proof-of-concept prototype system as a means of enforcing international humanitarian law in relatively narrow, bounded circumstances.

Systems will never have full moral capability or reasoning -- at least not in my lifetime. But we can assist them in complying with international humanitarian law, just as we instruct our soldiers. You don't give them a weapon, send them out into the field and say, "Figure it out, do what's right." You tell them this is acceptable and this is unacceptable, and you will be punished if you stray outside of these boundaries. We need to establish these boundaries for robotic weapons, too.
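To make that idea concrete, here is a minimal, purely illustrative sketch in Python of the kind of boundary check Arkin describes: a proposed engagement is refused unless every encoded constraint allows it. The constraint names, thresholds and data structures below are hypothetical and are not drawn from Arkin's actual prototype.

# Illustrative sketch only: a toy "ethical governor" that vetoes a proposed
# engagement unless every encoded constraint allows it. All names, rules and
# thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Engagement:
    target_type: str              # e.g. "armored_vehicle", "personnel"
    confidence: float             # target-recognition confidence, 0.0 to 1.0
    expected_civilian_harm: int   # estimated noncombatant casualties
    inside_no_strike_zone: bool   # target falls inside a protected area

def within_rules_of_engagement(e: Engagement) -> bool:
    # Hypothetical rule: only certain target classes may be engaged at all.
    return e.target_type in {"armored_vehicle", "artillery"}

CONSTRAINTS = [
    ("positive identification", lambda e: e.confidence >= 0.95),
    ("no strike in protected zone", lambda e: not e.inside_no_strike_zone),
    ("proportionality", lambda e: e.expected_civilian_harm == 0),
    ("rules of engagement", within_rules_of_engagement),
]

def authorize(e: Engagement) -> bool:
    """Permit the engagement only if no constraint is violated."""
    for name, permitted in CONSTRAINTS:
        if not permitted(e):
            print(f"Engagement refused: violates '{name}'")
            return False
    return True

The structure mirrors the point Arkin makes about soldiers: the boundaries are fixed before the system is ever sent into the field, not improvised once it gets there.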

If these weapons are to be developed, what will they look like? What are we actually, realistically, talking about here?

Let's talk about what we have right now: Reaper and Predator drones and the like. Imagine you're trying to use these weapons systems against a nation or an actor that is technologically sophisticated. You send them out on a mission and you encounter electronic countermeasures or jamming, and it breaks the link back to wherever the human operators happen to be. These systems are basically on their own.

Now, what they do is either circle in the air until a communications link is re-established, or return to base. But suppose this mission is vital and you need to carry out the operation anyway. You could send off an artillery shell and just have it explode in a particular area. But you may want the drone to be able to assess the situation after it has been given the authority to carry out the mission, even when the link goes down -- to use target recognition and engage a target on its own without asking a human being for further confirmation.
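As a rough illustration of that fallback behavior, the hypothetical sketch below chooses among loitering, returning to base, or continuing autonomously when the command link drops. The state names, the pre-authorization flag and the fuel threshold are assumptions for illustration, not details of any fielded system.

# Illustrative sketch of the link-loss fallback behavior described above.
# States, inputs and thresholds are hypothetical.

from enum import Enum, auto

class Action(Enum):
    LOITER = auto()               # circle until the link is re-established
    RETURN_TO_BASE = auto()
    CONTINUE_AUTONOMOUS = auto()  # proceed using onboard target recognition

def on_link_loss(mission_pre_authorized: bool,
                 fuel_margin_minutes: float,
                 loiter_budget_minutes: float = 20.0) -> Action:
    if mission_pre_authorized:
        # Authority to complete the mission was granted before launch;
        # any engagement still has to pass the onboard constraint checks.
        return Action.CONTINUE_AUTONOMOUS
    if fuel_margin_minutes > loiter_budget_minutes:
        return Action.LOITER
    return Action.RETURN_TO_BASE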

If it is ethically inappropriate, I believe these systems should be granted the authority to not carry out a mission.

- Ronald Arkin, robotics expert

You also want to grant it the authority to disengage. And this is the thing most people forget. These systems should have the authority to not engage a target as well. Today, you send a cruise missile at a bridge and it takes 30 minutes to get to its target. This missile gets launched, but the situation on the ground can actually change dramatically in 30 minutes. There could be a line of school buses on that bridge. The system is traveling so fast that you can't call it back home. You may have to grant the authority for the missile to self-destruct on its own and basically say, "I'm not going to do that."

That's part of being ethical as well. It's not always picking out the right targets; it's recognizing that you've been tasked with a mission that is inappropriate, unethical or illegal and saying, "I'm not going to do that." That brings up the scariness of "2001: A Space Odyssey" -- "I'm sorry, Dave, I can't do that." But if it is ethically inappropriate, I believe these systems should be granted the authority to not carry out a mission.
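The cruise-missile example suggests what that "authority to disengage" could look like in software: a re-check of the target area at the terminal phase, with an abort if the situation has changed. The sketch below is hypothetical; the inputs and decision labels are illustrative only.

# Illustrative sketch: re-assess the target area just before impact and
# abort (self-destruct) if the situation has changed. All names hypothetical.

def terminal_check(civilians_detected_in_scene: bool,
                   target_still_valid: bool) -> str:
    """Return the terminal-phase decision for a weapon already in flight."""
    if civilians_detected_in_scene or not target_still_valid:
        return "ABORT_SELF_DESTRUCT"   # in effect: "I'm not going to do that."
    return "PROCEED"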

There's another example that's straightforward and could be used today. There was a situation with the Taliban in a cemetery, 150 of them, high-ranking officers at a funeral. A Predator with Hellfire missiles was hovering over the area, and the human operators wanted to engage and take out all of those warriors at once. Well, they called all the way up to the Pentagon and were told they couldn't do it because it violated the rules of engagement.

Right now, we can designate GPS-located no-kill zones for these systems, and if you try to engage a target within a no-kill zone, the system will simply refuse to fire. That could be done tomorrow.
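A no-kill zone of that sort is easy to picture in code. The sketch below, with made-up coordinates and helper names, refuses to fire on any target that falls within a designated radius of a protected site; it is an illustration of the idea, not a description of any deployed system.

# Illustrative sketch of a GPS no-kill zone: refuse to fire if the target
# lies within a protected radius. Coordinates and names are hypothetical.

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical protected sites: (latitude, longitude, radius in km)
NO_KILL_ZONES = [
    (34.5186, 69.1841, 1.0),   # e.g. a hospital compound
    (34.5553, 69.2075, 0.5),   # e.g. a school
]

def fire_permitted(target_lat: float, target_lon: float) -> bool:
    """Refuse to fire on any target inside a designated no-kill zone."""
    for lat, lon, radius_km in NO_KILL_ZONES:
        if haversine_km(target_lat, target_lon, lat, lon) <= radius_km:
            return False
    return True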

So what is the most important concept for someone who has never considered autonomous weapons before?

The most important concept is how we can better protect noncombatant life if we are going to continue to wage war. Nobody is building a Terminator. No one wants a Terminator, as far as I know. Think instead of superior precision-guided munitions with the goal of saving lives.

I would also say the discussion we're having right now is vitally important, and far more important than my own research. We need to have it out of rationality and not out of fear. And we need to come up with appropriate guidelines and regulations to restrict the use of this technology in situations where it's ill-suited.

I believe we can save lives by using this technology in place of existing modes of warfare. And if we achieve that, it's fundamentally a humanitarian goal.

This interview has been edited and condensed.
