The Current

Are killer robots inevitable? Tech world uneasy about use of AI in warfare

Experts in the field of AI are calling for bans or strict limits on its use in warfare, but some say that the potential of the technology is too great to contain.
Arnold Schwarzenegger in the 2003 film Terminator 3. Advances in artificial intelligence are raising concerns about its potential uses on the battlefield. (Robert Zuckerman/Warner Bros. Pictures/The Associated Press)


The use of artificial intelligence in warfare may once have been the stuff of science-fiction films like The Terminator, but the age of autonomous killing machines could be coming sooner than you think.

Last week, more than 3,000 employees at Google signed an open letter saying they believed that the company "should not be in the business of war."

But one expert said that trying to keep AI off the battlefield comes up against one significant issue: it's too useful.

"Most of the leading researchers in this field compare AI's impact to the impact of electricity," said Gregory Allen, an adjunct fellow at the Center for a New American Security in Washington.

"It would be laughable to try and say that we should ban the use of electricity in warfare," he toldThe Current'sguest host Gillian Findlay.

Countries other than the U.S. are pursuing more aggressive development strategies, he said. Because so much of the research is open-source and available online, he added, there's no realistic way to stop the information from being used.

Gregory Allen said artificial intelligence's current military capabilities are nowhere near the level of rampaging robots that people envision. (Mario Anzuoni/Reuters)

Google employees were reacting to the news that the company has partnered with the Pentagon on Project Maven. While not a lot is known about the project, the tech company has said its aim is to use AI to interpret video imagery.

Ian Kerr, a law and technology expert at the University of Ottawa Faculty of Law, said that the project could use personal data, including facial recognition technology, to improve the accuracy of drone strikes against individuals.

"Google would be using our personal information, without our knowledge and consent, to help the Pentagon make targeting decisions," he said, "potentially about who to kill, maybe with, maybe without human involvement."

"The idea of delegating life or death decisions to a machine crosses a fundamental moral line," he said.

"Maven is a well-publicized DoD project and Google is working on one part of it specifically scoped to be for non-offensive purposes," Google told The Current in a statement. "The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work."

Allen disagreed that Project Maven is about taking personal information from Google's data sets and using it in a military context. The project takes video and still images from drones and applies "a minimal amount of analysis," he said.

An unmanned U.S. Predator drone flies over southern Afghanistan in 2010. Critics argue the work of Project Maven could ultimately be used to improve drone-strike accuracy. (Kirsty Wigglesworth/Associated Press)

"This technology is counting things... it's saying: 'In this picture there are five buildings, in this picture there are three cars and there are two people.'"

"It is not even at the level of characterizing activity, as in: 'This person is walking' much less saying: 'This person is consistent with the Google user ID X.'"

Kerr's concern is where these developing technologies could end up. Companies like Google have access to enormous datasets, he said, which nation states or other organizations could use for violent ends.

"The idea isn't that weapons should be off limits," he said."The idea is that we do not delegate control-of-kill decisions to weapons," he said.

"We have to maintain meaningful human control when it comes to killing."

Listen to the full conversation at the top of this page.


This segment was produced by The Current's Geoff Turner and Howard Goldenthal.