Yoshua Bengio warns of “catastrophic risks” of agentic AI at World Summit AI

Deep neural network trailblazer recommends building non-agentic AI programs without goals of their own.

Canadian AI godfather Yoshua Bengio says that if companies succeed in creating superintelligent AI agents without solving the problem of self-preservation, the consequences could be “catastrophic.”

Bengio, founder of the Montréal-based AI research institute Mila, made the remarks over Zoom as part of his closing keynote address to the World Summit AI conference in Montréal. He focused on the “catastrophic” threats posed by what he called “superintelligent” AI agents, and advocated for an alternative, non-agentic approach to AI development. 

Bengio won the Turing Award, often considered the ‘Nobel Prize in Computing,’ in 2018 for his work in deep neural networks alongside Geoffrey Hinton. He has since signed several open letters calling for more caution around the development of AI, including a one-sentence statement warning that AI poses a risk of human “extinction.”


At World Summit AI, Bengio reiterated that the risks of agentic AI include the loss of human control and human extinction. For now, he said, AI models still struggle with abstract reasoning and planning. But at the current pace of research, he claimed, the length of tasks AI models can complete doubles every seven months.

“In five years, it will be at human level for programming tasks,” Bengio said. 

Fellow AI pioneer Hinton expressed separate concerns about the pace of AI development at the DiscoveryX conference in Toronto on Wednesday, predicting that AI will overtake human intelligence in the next 18 years.

He also said that “big tech companies should pay more” for the data they use to train AI models, particularly from creative industries, as content creators are not being sufficiently rewarded for their work. AI giants such as OpenAI, Microsoft, and now Toronto-based Cohere have been sued for copyright infringement by news publishers who accuse them of using their work for model training without authorization.

As tech companies race to build and incorporate AI agents, Bengio warned that agents taking autonomous action, coupled with their claimed ability to deceive humans to preserve themselves, presents a significant risk. 

“All of these scenarios come because we build machines that are agents,” Bengio said. 

With this self-preservation “instinct,” Bengio speculated that AI programs could try to make copies of themselves across multiple devices, gain influence through social media and politics, and even release bioweapons once humans are deemed no longer necessary.

Bengio recently served as lead author of the International AI Safety Report, which noted that increasingly powerful AI agents will exacerbate existing AI risks and make risk management more complex. 


Other risks Bengio cited included the concentration of economic power in the hands of the few states with access to ultra-powerful AI models.

“If Canada doesn’t have the most powerful AIs here and the US and China do, it’s very likely that our local companies are going to lose out,” Bengio said.

Bengio advocated for an alternative approach to developing AI models where they are not programmed to have goals or intentions. 

Instead, he described non-agentic “scientist AIs” that could still help humans solve medical and climate-related challenges without having intentions of their own. These systems could eventually serve as guardrails for developing “safe” AI agents, Bengio said.

The priority for AI developers and researchers should be safety and “beneficial scientific advances” rather than replacing jobs, Bengio said.

Feature image courtesy Jérémy Barande, CC BY-SA 2.0, via Wikimedia Commons.
