
Researchers say it's time to crack open AI 'black boxes' and look for the biases inside

As algorithms make more of the critical decisions affecting our lives, it is becoming harder to understand and challenge how those decisions are made, a new report says.

It's becoming more difficult to understand and challenge algorithms that make big decisions in our lives

ProPublica found last year that a proprietary algorithm used in the U.S. to predict the likelihood that a person who committed a crime would reoffend was biased against black offenders. (Mike Laanela/CBC)

Courts, schools and other public agencies that make decisions using artificial intelligence should refrain from using "black box" algorithms that aren't subject to outside scrutiny, a group of prominent AI researchers says.

The concern is that, as algorithms become increasingly responsible for making critical decisions affecting our lives, it has become more difficult to understand and challenge how those decisions are made. In some cases, those decisions have been found to carry racist or sexist biases.

It's one of a handful of recommendations from New York University's AI Now Institute, which examines the social impact of AI on areas such as civil liberties and automation. The group, which counts researchers Kate Crawford of Microsoft and Meredith Whittaker of Google among its members, released its second annual report on Wednesday afternoon.

AI Now is part of an increasingly vocal group of academics, lawyers and civil liberties advocates that has been calling for greater scrutiny of systems that rely on artificial intelligence, especially where those decisions involve "high stakes" fields such as criminal justice, health care, welfare and education.

Given the growing role algorithms play in so many parts of our lives (such as those used by Facebook, one of whose data centres is pictured here), we know incredibly little about how these systems work. (Jonathan Nackstrand/AFP/Getty Images)

In the U.S., for example, automated decision-making systems are already being used to decide who to promote, who to loan money to and which patients to treat, the report says.

"The way that these systems work can lead to bias or replica the biases in the status quo, and without critical attention they can do as much harm if not more harm in trying to be, supposedly, objective," says Fenwick McKelvey, an assistant professor at Concordia University in Montreal who researches how algorithms influencewhat people see online.

Obscuring inequalities

McKelvey points to a recent example involving risk assessments of Canadian prisoners up for parole, in which a Métis inmate is going before the Supreme Court to argue the assessments discriminate against Indigenous offenders.

Were such a system ever automated, there's a good chance it would amplify such a bias, McKelvey says. That was what ProPublica found last year, when a proprietary algorithm used in the U.S. to predict the likelihood that a person who committed a crime would reoffend was shown to be biased against black offenders.

"If we allow these technical systems to stand in for some sort of objective truth, we mask or obfuscate the kind of deep inequities in our society," McKelvey said.

Part of the problem, says AI Now, is that although algorithms are often seen as neutral, some have been found to reflect the biases within the data used to train them, which can in turn reflect the biases of those who create the data sets.

"Those researching, designing and developing AI systems tend to be male, highly educated and very well paid," the report says. "Yet their systems are working to predict and understand the behaviours and preferences of diverse populations with very different life experiences.

"More diversity within the fields building these systems will help ensure that they reflect a broader variety of viewpoints."

Going forward, the group would like to see more diverse experts, from a wider range of fields and not just technical ones, involved in determining the future of AI research and working to mitigate bias in how AI is used in areas such as education, health care and criminal justice.

There have also been calls for public standards for auditing and understanding algorithmic systems, the use of rigorous trials and tests to root out bias before the systems are deployed, and ongoing efforts to monitor those systems for bias and fairness after release.
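
As an illustration of what such a test might look like, here is a minimal sketch of one common audit: comparing false positive rates across demographic groups, the kind of disparity ProPublica reported. The records, group labels and field names are all synthetic and hypothetical.

```python
# A minimal sketch of one bias audit: comparing false positive rates across
# groups, the disparity ProPublica reported. Records and field names are
# synthetic and for illustration only.
from collections import defaultdict

def false_positive_rates(records):
    """For each group, the share of non-reoffenders wrongly flagged high risk."""
    flagged = defaultdict(int)   # non-reoffenders flagged high risk, per group
    total = defaultdict(int)     # all non-reoffenders, per group
    for group, reoffended, flagged_high_risk in records:
        if not reoffended:
            total[group] += 1
            flagged[group] += flagged_high_risk
    return {g: flagged[g] / total[g] for g in total}

# Synthetic records: (group, reoffended, flagged_high_risk)
records = [
    ("group_a", 0, 1), ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 0, 0), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

print(false_positive_rates(records))
# group_a's rate (2/3) is double group_b's (1/3): people in group_a who did
# not reoffend are flagged high risk far more often, the pattern ProPublica
# found for black defendants.
```

A real audit would run comparisons like this on large samples and across several fairness measures, since a system can look fair on one metric while failing another.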