
Ottawa unveils new AI code of conduct for Canadian companies

Industry Minister François-Philippe Champagne unveiled a new voluntary AI code of conduct for Canadian businesses Wednesday, saying establishing public trust in AI will help promote innovation and development.

Code comes a day after the industry minister announced amendments to the government's AI and privacy bill

Industry Minister François-Philippe Champagne has unveiled a new voluntary code of conduct for the use of advanced generative AI systems in Canada. (Justin Tang/The Canadian Press)

The federal government has unveiled a voluntary code of conduct for the use of advanced generative artificial intelligence systems in Canada that includes commitments to transparency and avoiding bias.

Speaking at the All In conference on AI in Montreal on Wednesday, Industry Minister François-Philippe Champagne said the code will complement legislation making its way through Parliament, Bill C-27, and promote the safe development of AI systems in Canada.

"While we are developing a law here in Canada, it will take time," he said."And I think that if you ask people in the street, they want us to take action now to make sure that we have specific measures that companies can take now to build trust in their AI products."

Generative AI includes products like ChatGPT and other systems that can create things like text, images, music or video.

Among the dozen companies and groups that have agreed to sign the voluntary code of conduct are BlackBerry, OpenText, Telus and the Council of Canadian Innovators, which represents more than 100 start-up companies across Canada.

Audrey Champoux, spokesperson for Champagne, said the government will work to convince more businesses and groups to adopt the code of conduct.

Isabelle Hudon, president of the Business Development Bank of Canada (BDC), praised the government's decision to make the code of conduct voluntary rather than obligatory. During a panel discussion at the conference following Champagne's announcement, she said BDC, a Crown corporation, has been having internal debates on how to use AI to serve clients and improve productivity.

"I do believe your code of conduct will guide us into making the right decisions at the right moment for the right reasons," she said.

Tobi Lutke, CEO of Shopify, was critical of the government's initiative, describing it as "another case of EFRAID."

"I won't support it," Lutke posted on X, formerly known as Twitter. "We don't need more referees in Canada. We need more builders. Let other countries regulate while we take the more courageous path and say 'come build here.'"

The two-page code of conduct commits signatories to supporting the development of a "robust, responsible AI ecosystem in Canada," contributing to the development of standards, sharing information and collaborating with researchers.

They agree to put in place "appropriate risk management systems," make their systems subject to risk assessments, protect them against cyber attacks and commit to human oversight and monitoring after they are deployed.

One of the concerns often expressed about AI systems is the risk of bias being baked into the algorithms, resulting in discriminatory decisions. The code of conduct commits signatories to assessing and addressing discriminatory impacts at different phases of development and deployment of a system.

The code also calls for companies to be transparent about the AI systems they are using.

"Sufficient information is published to allow consumers to make informed decisions and for experts to evaluate whether risks have been adequately addressed," says the code.

The code says signatories also commit to developing and using AI systems "in a manner that will drive inclusive and sustainable growth in Canada, including by prioritizing human rights, accessibility and environmental sustainability, and to harness the potential of AI to address the most pressing global challenges of our time."

Champagne said developing AI's potential in Canada requires restoring public confidence in its safety.

"We have restored trust with the people on responsible AI. (Now) we move from trust to innovation," he said, adding Canada should be a world leader in developing responsible AI.

Champagne's announcement comes a day after he told members of the House of Commons industry committee that Canada is at the forefront of developing a framework and rules for the use of artificial intelligence, something that the U.S. and Europe are watching closely as the technology develops quickly.

"I hope that colleagues feel the same level of urgency I feel, because every day we kind of learn of a new aspect of that technology that goes beyond what we have already seen. So having a framework, I think, will be much needed and it will help with responsible AI," he said.

The government has taken the first steps toward governing AI, including the Artificial Intelligence and Data Act, part of the broader Bill C-27, which was tabled in June 2022. The legislation would also update Canada's privacy law, something that hasn't been done in two decades.

Champagne announced a series of amendments to that legislation on Tuesday, saying they respond to feedback he has received.

A humanoid robot is seen at an AI summit in Geneva, Switzerland, on July 5. (Martial Trezzini/Keystone/The Associated Press)

Among the amendments will be one that defines what constitutes a "high-impact AI system" covered by the legislation, Champagne told the committee.

"We will propose an amendment to define classes of systems that would typically be considered high impact for example AI systems that make important decisions about loans or employment," he said.

Other amendments, he said, will "introduce specific and distinct obligations for general-purpose AI systems like ChatGPT" and make it clearer what obligations the developer of an AI system has versus someone who manages and deploys it.

The government will also "strengthen and clarify" the role of the proposed AI and Data Commissioner provided for in the legislation and enable them to share information and co-operate with other regulators like the Competition Commissioner or the Privacy Commissioner, Champagne said.

Amendments would also recognize a fundamental right to privacy for Canadians, along with the bill's existing provisions to allow people to transfer their data to another company or have it deleted.

Champagne said the government has also decided to beef up protection for children's online information and will give the Privacy Commissioner more flexibility to reach compliance agreements with companies that violate the privacy law.