Opinion

Use of controversial surveillance technology demonstrates the need to limit police power

It's clear that police use of surveillance technology is not only an issue of privacy but also an issue of racism, writes Kate Schneider.

This news is deeply unsettling, and not just for the privacy implications

Multiple police services in Canada have acknowledged using controversial facial recognition technology from Clearview AI. This news should make us even more skeptical of expanding police power, writes Kate Schneider. (Photo illustration/CBC)

This column is an opinion by Kate Schneider, a master's student at the University of Oxford from Waterloo, Ont. For more information about CBC's Opinion section, please see the FAQ.

Last month, CBC released a report divulging new details about the Toronto Police Service's use of Clearview AI's controversial surveillance technology. The findings confirmed that Toronto police had employed facial recognition software to identify both suspects and victims in several dozen police investigations.

These findings built on news from February 2020, which initially revealed that several officers had used a trial version of the software, despite the service's denial a month earlier that it used the technology.

This news in itself is deeply unsettling, and not just for the privacy implications. It reveals a concerning degree of power held by police forces and how certain technologies can enable the abuse of that power.

Concerns about Clearview AI

The Toronto Police Service is not the only law enforcement agency in Canada to have come under fire for its relationship with Clearview AI. These revelations came in the wake of a June ruling by Canada's Privacy Commissioner that the RCMP's use of Clearview AI, whose software scrapes online images of Canadians without their consent, violated the federal Privacy Act. Police departments in Vancouver, Edmonton, Calgary, and Ottawa have also disclosed using or "testing" this software in the past.

Clearview AI is based in the United States, but its facial recognition software is well known around the world. Multiple police departments have admitted to using this technology, including departments in the United States, France, Australia, and the United Kingdom. Most of these countries have asked Clearview to purge its database of images collected there. An estimated one-quarter of U.S. police forces have facial recognition tools at their disposal.

This facial recognition technology can be applied in a number of situations. Police have been criticized for using it to identify protestors at public demonstrations. They can also pull footage from CCTV cameras near crime scenes and attempt to match the identified faces with Clearview AI's alarmingly vast database of over 10 billion images scraped from social media websites.

Clearview AI's capabilities are becoming even more terrifyingly sophisticated. In October 2021, CEO Hoan Ton-That announced that Clearview was developing new facial recognition tools that could unblur faces disguised for privacy reasons or identify someone even when masked.

Police facing scrutiny

In a time when law enforcement agencies have already come under heightened scrutiny through movements like Defund the Police, Canadian police forces' relationship with Clearview AI should make us even more skeptical of expanding police power.

In particular, the ability of police to surveil Canadians is most concerning for the potential impacts on racialized people, especially Black and Indigenous individuals.

Although we sometimes pretend that racism is exclusively an American problem, Canada has its own established history of racial discrimination carried out by police. As activist and writer Desmond Cole has documented, Canadian police have upheld racially discriminatory programs, such as carding. An Ontario Human Rights Commission report in 2020 also found that Toronto police disproportionately targeted Black Canadians.

Technology is often portrayed as less biased due to assumptions that it eliminates human prejudice. However, police surveillance software has been shown to misidentify racialized individuals at a higher rate than white suspects.

With all these factors compounded together, it's clear that police employing surveillance technology is not only an issue of privacy. It's also an issue of racism.

The way forward

Canadian police forces' use of Clearview AI demonstrates a need to regulate facial recognition surveillance technologies due to their disturbing abilities to violate our privacy. More fundamentally, it also shows the need to be increasingly wary of the power wielded by police in Canada.

As shown, rapid technological innovation and the increasingly sophisticated tools it puts in the hands of law enforcement will only exacerbate the risks of allowing extensive police power. While all Canadians should be concerned, our country's history of policing shows that racialized individuals will most likely suffer the consequences disproportionately.

Experts and advocates against police violence have already laid out multiple suggestions for how we can limit police power and keep our communities safer in alternative ways. The findings about Canadian police and Clearview AI demonstrate that it's time we paid careful attention to these demands and acted upon them.


Do you have a strong opinion that could add insight, illuminate an issue in the news, or change how people think about an issue? We want to hear from you. Here's how to pitch to us.