
AI could have catastrophic consequences. Is Canada ready?

A new report commissioned by the U.S. State Department is warning governments are running out of time to implement comprehensive safeguards on the development and regulation of advanced artificial intelligence and that the future of the human species could be at stake.

AI researcher warns governments are running out of time to put in place comprehensive safeguards

AMECA, an AI robot from IVADO Labs, at the All In artificial intelligence conference in Montreal on Thursday, Sept. 28, 2023. (Ryan Remiorz/The Canadian Press)

Nations, Canada included, are running out of time to design and implement comprehensive safeguards on the development and deployment of advanced artificial intelligence systems, a leading AI safety company warned this week.

In a worst-case scenario, power-seeking superhuman AI systems could escape their creators' control and pose an "extinction-level" threat to humanity, AI researchers wrote in a report commissioned by the U.S. Department of State entitled Defence in Depth: An Action Plan to Increase the Safety and Security of Advanced AI.

The department insists the views the authors expressed in the report do not reflect the views of the U.S. government.

But the report's message is bringing the Canadian government's actions to date on AI safety and regulation back into the spotlight, and one Conservative MP is warning that the government's proposed Artificial Intelligence and Data Act is already out of date.

AI vs. everyone

The U.S.-based company Gladstone AI, which advocates for the responsible development of safe artificial intelligence, produced the report. Its warnings fall into two main categories.

The first concerns the risk of AI developers losing control of an artificial general intelligence (AGI) system. The authors define AGI as an AI system that can outperform humans across all economic and strategically relevant domains.

While no AGI systems exist to date, many AI researchers believe they are not far off.

"There is evidence to suggest that as advanced AI approaches AGI-like levels of human and superhuman general capability, it may become effectively uncontrollable. Specifically, in the absence of countermeasures, a highly capable AI system may engage in so-called power seeking behaviours," the authors wrote, adding that these behaviours could include strategies to prevent the AI itself from being shut off or having its goals modified.

In a worst-case scenario, the authors warn that such a loss of control "could pose an extinction-level threat to the human species."

"There's this risk that these systems start to get essentially dangerously creative. They're able to invent dangerously creative strategies that achieve their programmed objectives while having very harmful side effects. So that's kind of the risk we're looking at with loss of control," Gladstone AI CEO Jeremie Harris, one of the authors of the report, said Thursday in an interview with CBC's Power & Politics.

Artificial intelligence could pose extinction-level threat to humans, expert warns

A new report is warning the U.S. government that if artificial intelligence laboratories lose control of superhuman AI systems, it could pose an extinction-level threat to the human species. Gladstone AI CEO Jeremie Harris, who co-authored the report, joined Power & Politics to discuss the perils of rapidly advancing AI systems.

The second category of catastrophic risk cited in the report is the potential use of advanced AI systems as weapons.

"One example is cyber risk," Harris told P&P host David Cochrane. "We're already seeing, for example, autonomous agents. You can go to one of these systems now and ask, ... 'Hey, I want you to build an app for me, right?' That's an amazing thing. It's basically automating software engineering. This entire industry. That's a wicked good thing.

"But imagine the same system ... you're asking it to carry out a massive distributed denial of service attack or some other cyber attack. The barrier to entry for some of these very powerful optimization applications drops, and the destructive footprint of malicious actors who use these systems increases rapidly as they get more powerful."

Harris warned that the misuse of advanced AI systems could extend into the realm of weapons of mass destruction, including biological and chemical weapons.

The report proposes a series of urgent actions that nations, beginning with the U.S., should take to safeguard against these catastrophic risks, including export controls, regulations and responsible AI development laws.

Is Canada's legislation already defunct?

Canada currently has no regulatory framework in place that is specific to AI.

The government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27 in June 2022. It's intended to set a foundation for the responsible design, development and deployment of AI systems in Canada.

The bill has passed second reading in the House of Commons and is currently being studied by the industry and technology committee.

In 2023, the federal government also introduced the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, a code designed to provide Canadian companies with common standards until AIDA comes into effect.

At a press conference on Friday, Industry Minister François-Philippe Champagne was asked why, given the severity of the warnings in the Gladstone AI report, he remains confident that the government's proposed AI bill is equipped to regulate the rapidly advancing technology.

"Everyone is praising C-27," said Champagne. "I had the chance to talk to my G7 colleagues and ... they see Canada at the forefront of AI, you know, to build trust and responsible AI."

Conservative member of Parliament Michelle Rempel Garner says the government's proposed artificial intelligence bill is out of date and inadequate. Rempel Garner is pictured here holding a press conference on Parliament Hill in Ottawa on Tuesday, April 5, 2022. (Sean Kilpatrick/The Canadian Press)

In an interview with CBC News, Conservative MP Michelle Rempel Garner said Champagne's characterization of Bill C-27 was nonsense.

"That's not what the experts have been saying in testimony at committee and it's just not reality," said Rempel Garner, who co-chairs the Parliamentary Caucus on Emerging Technology and has been writing about the need for government to act faster on AI.

"C-27 is so out of date."

AIDA was introduced before OpenAI, one of the world's leading AI companies, unveiled ChatGPT in 2022. The AI chatbot represented a stunning evolution in AI technology.

"The fact that the government has not substantively addressed the fact that they put forward this bill before a fundamental change in technology came out ... it's kind of like trying to regulate scribes after the printing press has gone into widespread distribution," said Rempel Garner. "The government probably needs to go back to the drawing board."

OpenAI CEO Sam Altman attends a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, Tuesday, May 16, 2023, on Capitol Hill in Washington. (Patrick Semansky/The Associated Press)

In December 2023, Gladstone AI's Harris told the House of Commons industry and technology committee that AIDA needs to be amended.

"By the time AIDA comes into force, the year will be 2026. Frontier AI systems will have been scaled hundreds to thousands of times beyond what we see today," Harris told MPs. "AIDA needs to be designed with that level of risk in mind."

Harris told the committee that AIDA needs to explicitly ban systems that introduce extreme risks, address the open source development of dangerously powerful AI models, and ensure that AI developers bear responsibility for the safe development of their systems by, among other things, preventing their theft by state and non-state actors.

"AIDA is an improvement over the status quo, but it requires significant amendments to meet the full challenge likely to come from near-future AI capabilities," Harris told MPs.