Posted: December 1, 2016 | Updated: December 2, 2016

Experts Weigh In On Autonomous Weapons

This post was published on the now-closed HuffPost Contributor platform. Contributors control their own work and posted freely to our site.
Image credit: Shutterstock

FLI's Ariel Conn recently spoke with Heather Roff and Peter Asaro about autonomous weapons. Roff, a research scientist at the Global Security Initiative at Arizona State University and a senior research fellow at the University of Oxford, recently compiled an international database of weapons systems that exhibit some level of autonomous capability. Asaro is a philosopher of science, technology, and media at The New School in New York City. He looks at fundamental questions of responsibility and liability with all autonomous systems, but he is also the co-founder and vice-chair of the International Committee for Robot Arms Control and a spokesperson for the Campaign to Stop Killer Robots.

The following interview has been edited for brevity, but you can read it in its entirety here or listen to it here.

ARIEL: Dr. Roff, I'd like to start with you. With regard to the database: what prompted you to create it, what information does it provide, and how can we use it?

ROFF: The main impetus behind the creation of the database [was] a feeling that the same autonomous or automated weapons systems were brought up in discussions over and over again. It made it seem like there wasn't anything else to worry about. So I created a database of about 250 autonomous systems that are currently deployed [from] Russia, China, the United States, France, and Germany. I code them along a series of about 20 different variables: from automatic target recognition [to] the ability to navigate [to] acquisition capabilities [etc.].

It's allowing everyone to understand that autonomy isn't just binary. It's not a yes or a no. Not many people in the world have a good understanding of what modern militaries fight with, and how they fight.
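To make the coding scheme concrete, here is a minimal, purely hypothetical sketch of how a single record in such a database might be represented. The field names below are illustrative stand-ins; the interview does not specify the database's actual schema or variable names.

```python
# Hypothetical sketch only: the real database's schema and variable names
# are not described in the interview. It illustrates Roff's approach of
# coding each system along many capability variables rather than a single
# autonomous yes/no flag.
from dataclasses import dataclass

@dataclass
class WeaponSystemRecord:
    name: str
    country: str                        # e.g., "United States", "France"
    automatic_target_recognition: bool  # one of the ~20 coded variables
    autonomous_navigation: bool
    target_acquisition: bool
    # ...the remaining variables would follow the same pattern

# A system can be autonomous in some respects and not others,
# which is the sense in which autonomy "isn't just binary."
example = WeaponSystemRecord(
    name="Example System",
    country="N/A",
    automatic_target_recognition=True,
    autonomous_navigation=True,
    target_acquisition=False,
)
```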

ARIEL: And Dr. Asaro, your research is about liability. How is it different for autonomous weapons versus a human overseeing a drone that accidentally fires on the wrong target?

ASARO: My work looks at autonomous weapons and other kinds of autonomous systems, and at the interface of their ethical and legal aspects: specifically, questions about the ethics of killing, and the legal requirements under international law for killing in armed conflict. These kinds of autonomous systems are not really legal and moral agents in the way that humans are, and so delegating the authority to kill to them is unjustifiable.

One aspect of accountability is, if a mistake is made, holding people to account for that mistake. There's a feedback mechanism to prevent that error from occurring in the future. There's also a justice element, which could be attributive justice, in which you try to make up for a loss. Other forms of accountability look at punishment itself. When you have autonomous systems, you can't really punish the system. More importantly, if nobody really intended the effect that the system brought about, then it becomes very difficult to hold anybody accountable for its actions. The debate is really framed around this question of the accountability gap.

ARIEL: One of the things we hear a lot in the news is about always keeping a human in the loop. How does that play into the idea of liability? And realistically, what does it mean?

ROFF: I actually think this is just a really unhelpful heuristic. It's hindering our ability to think about what's potentially risky or dangerous or might produce unintended consequences. So here's an example: the UK's Ministry of Defence calls this the Empty Hangar Problem. It's very unlikely that they're going to walk down to an airplane hangar, look in, and be like, "Hey! Where's the airplane? Oh, it's decided to go to war today." That's just not going to happen.

These systems are always going to be used by humans, and humans are going to decide to use them. A better way to think about this is in terms of task allocation. What is the scope of the task, and how much information and control does the human have before deploying that system to execute it? If there is a lot of time, space, and distance between the decision to field the system and the application of force, there's more time for things to change on the ground, and more room for the human to [say] they didn't intend for this to happen.

ASARO: If self-driving cars start running people over, people will sue the manufacturer. But there are no mechanisms in international law for the victims of bombs, missiles, and potentially autonomous weapons to sue the manufacturers of those systems. That just doesn't happen. So there are no incentives for the companies that manufacture those [weapons] to improve safety and performance.

ARIEL: Dr. Asaro, we've briefly mentioned the definitional problems of autonomous weapons. How does liability play in there?

ASARO: The law of international armed conflict is pretty clear that humans are the ones who make the decisions, especially a targeting decision or the taking of a human life in armed conflict. The question of having a system that could range over many miles and many days and select targets on its own is where things become problematic. Part of the definition is: how do you figure out exactly what constitutes a targeting decision, and how do you ensure that a human is making that decision? That's the direction the discussion at the UN is going as well. Instead of trying to define what an autonomous system is, we focus on the targeting and firing decisions of weapons for individual attacks. What we want to require is meaningful human control over those decisions.

ARIEL: Dr. Roff, you were working on the idea of meaningful human control, as well. Can you talk about that?

ROFF: If [a commander] fields a weapon that can go from attack to attack without checking back with her, then the weapon is making the proportionality calculation, and she [has] delegated her authority and her obligation to a machine. That is prohibited under international humanitarian law (IHL), and I would say it is also morally prohibited. You can't offload your moral obligation to a nonmoral agent. That's where our work on meaningful human control sits: a human commander has a moral obligation to undertake precaution and proportionality in each attack.

ARIEL: Is there anything else you think is important to add?

ROFF: AI still has limitations. We have really great applications of AI, and we have blind spots. It would be really incumbent on the AI community to be vocal about where they think there are capacities and capabilities that could be reliably and predictably deployed on such systems. If they don't think those technologies or applications can be reliably and predictably deployed, then they need to stand up and say as much.

ASARO: We're not trying to prohibit autonomous operations of different kinds of systems, or the development and application of artificial intelligence for a wide range of civilian and military applications. But there are certain applications, specifically the lethal ones, that have higher moral and legal standards that need to be met.
