CIFAR leader expects CAISI to help inform AI policy in Canada and abroad

With its new AI safety institute, Canada aims to study the technology’s risks and collaborate with other countries.

Artificial intelligence (AI) leaders believe that Canada’s new AI safety institute could help strengthen AI policy, adoption, and the country’s role on the global stage.

Elissa Strome, executive director of the Pan-Canadian AI Strategy at the Canadian Institute for Advanced Research (CIFAR), said she sees room for the Canadian Artificial Intelligence Safety Institute (CAISI) to inform not just our knowledge of AI, but how to use and regulate it, both domestically and abroad.

Lawyer Carole Piovesan, who specializes in AI, credited the federal government for drawing on Canada’s existing strengths and infrastructure and focusing on understanding and mitigating some of the bigger risks associated with AI through CAISI.

“Global co-operation is going to be essential.”

Elissa Strome, CIFAR

Launched last week by Canada’s Liberal government, CAISI has been tasked with studying some of the risks associated with “advanced or nefarious” AI systems and how to mitigate them, in collaboration with other countries around the world.

The federal government committed $50 million CAD over five years to CAISI in Budget 2024 as part of a larger $2.4-billion AI package that also includes funding for AI compute and startups. Ottawa has also allocated $27 million to CIFAR, which already leads the Pan-Canadian AI Strategy, to administer CAISI’s research stream.

With CAISI’s launch, Canada has joined countries such as the United States and the United Kingdom, which have established similar AI safety institutes.

CAISI, which will be housed within Innovation, Science, and Economic Development Canada (ISED), will help connect the country’s existing AI research infrastructure through CIFAR, the National Research Council of Canada, and Canada’s three AI research hubs: Edmonton’s Amii, the Toronto-based Vector Institute, and Montréal’s Mila.

In an interview with BetaKit, Piovesan, co-founder and managing partner at INQ Law, said that “the institute is leveraging a lot of what we already have in place.”

Piovesan, who focuses on AI risk management at INQ Law, teaches AI regulation as an adjunct professor at the University of Toronto, and has previously worked with the federal government on AI, noted that CAISI appears to be taking what Canada is already strong at and aligning those capabilities with a policy agenda around AI safety.

“I do think it makes sense that we are launching this and that we’re doubling down on how we can better understand and mitigate some of the more profound risks around [AI],” she added.

RELATED: Evolving Canada’s AI strategy with CIFAR’s Elissa Strome

In an interview with BetaKit, Strome noted that there has been growing concern over the increasing capabilities of advanced AI systems, particularly so-called frontier models.

Strome said the launch of OpenAI’s generative AI chatbot ChatGPT in late 2022 left many technologists, governments, and industry players surprised by how powerful large language models had become, and triggered “a robust global conversation” about the risks of AI whose capabilities outstrip our understanding and existing regulatory frameworks.

“There’s quite a lot of research that we still need to do to understand how and where and when and why these AI systems are making the decisions that they make, how are they developing the capabilities that they have, [and] when and where and why do they hallucinate,” Strome said.

CAISI will assess AI risks and test AI systems. It will also develop guidance around the detection of AI-generated content, the evaluation of advanced AI models, and ensuring privacy in AI systems.

Strome indicated that CAISI will focus on the larger, systemic risks associated with advanced AI as it becomes increasingly sophisticated, including how advanced AI can destabilize institutions and impact democracy through misleading AI-generated media and disinformation. 

RELATED: Federal government commits $2.4 billion to AI compute, startups, and safety through Budget 2024

CAISI will take guidance from last year’s International Scientific Report on the Safety of Advanced AI, from its international partners, and from Canada’s own AI research community, she said. Leaders from around the world, including a Canadian delegation, are convening next week in San Francisco to discuss AI safety.

While Canada is widely viewed as a global leader in AI research, reports indicate that domestic AI adoption is lagging. Piovesan noted that CAISI launches as most Canadian companies do not plan to adopt AI, pointing to a September 2024 survey and analysis from Statistics Canada in which many firms described the technology as not relevant to their operations.

“That’s a risk to Canada in terms of its ongoing competitiveness,” she added.

Strome anticipates that, through its research, CAISI will help lay the foundation for downstream adoption of AI across Canadian businesses and other organizations.

Strome believes it is critical that CAISI is led by the federal government given the collaboration required between jurisdictions. “We really do very much need leadership from within the government, and policy expertise and diplomatic expertise at that table as well,” she said.

RELATED: Vector Institute leaders chart AI hub’s progress and remaining challenges

“AI is a technology that doesn’t know any borders and is truly global in its scope, and global co-operation is going to be essential,” she added.

CAISI is the latest in a number of AI initiatives rolled out by Canada’s Liberal government, joining the Pan-Canadian AI Strategy, the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, and the proposed Artificial Intelligence and Data Act, which remains stalled at committee.

Piovesan hopes that CAISI will help continue to foster Canada’s leadership in AI safety and provide evidence to serve as the basis for AI regulation. Strome expects CAISI’s research to do that, and sees room for it to help Canada play an important role globally on the AI safety front.

“Because we have such a long history in research, because we have such a strong, concentrated AI research ecosystem, and investments that we can leverage to contribute to this global effort now, I think we have the opportunity to have an outsized impact,” Strome said. “And I think that Canadian research on AI safety will contribute significantly to addressing some of the problems associated with AI risks.”

Feature image courtesy CAISI. Photo by Jean Lemieux.
