Richard Sutton warns against “centralized control” of AI regulation based on fear

At Upper Bound conference, Turing Award winner says human-like AI is “inevitable.”

Canadian Turing Award winner Richard Sutton says humans should look at artificial intelligence (AI) with “courage, pride, and a sense of adventure,” and he is pushing back against approaches to regulating the technology that are rooted in fear.

The chief scientific advisor at the Alberta Machine Intelligence Institute (Amii) made the remarks in his closing keynote speech at Amii’s Upper Bound conference in Edmonton. The researcher spoke about the next phase of experiential AI development and encouraged attendees to embrace an “inevitable” future with intelligent machines. 

Sutton drew a parallel between regulating AI and the “centralized control” of people, citing examples such as calls to control speech, tariffs, and economic sanctions.

“There are many calls to centralized control of AI,” Sutton said. “Letters to pause AI, or to align them to people’s goals, limit the computer power of AI, and of course, ‘safety’ is such a big issue.”

He added that the arguments for centralized control of AI and humans are “eerily similar” and driven by fear. Instead of this approach, Sutton advocated for “decentralized cooperation,” where people—or machines—are pursuing different goals but to a mutual benefit. However, he added that humans often “suck” at cooperation. 

Sutton’s remarks come as Canadian tech companies and governments are embracing the technology. The researcher’s talk at Upper Bound was preceded by a short virtual greeting from Canada’s new AI and digital innovation minister, Evan Solomon, who said that building Canada into a world AI leader is a priority. Yesterday, Prime Minister Mark Carney published a mandate letter to his cabinet ministers calling for them to use AI in their work to boost productivity. 

“AI’s remarkable potential spans almost every sector of our society,” Solomon said. “With such broad reach and huge potential comes a need for thoughtful and intentional action strategy.”

RELATED: Prime Minister Mark Carney’s mandate letter calls for government to deploy AI “at scale”

Sutton claimed that part of humans’ role in the universe is to create intelligent machines that will design new things in turn, and that the development of a human-like intelligence is “inevitable.” The researcher had previously put the chances of developing something resembling artificial general intelligence (AGI) by 2030 at one in four, and by 2040 at one in two. Sutton also works as a research scientist at John Carmack’s Keen Technologies, which he joined through a partnership signed in 2023 to focus on researching “human-like” intelligence.

In March, Richard Sutton won the Turing Award—colloquially known as the ‘Nobel Prize in Computing’—for his pioneering work in reinforcement learning (RL), a key technique used in training the large language models behind tools such as ChatGPT.

Sutton said we are entering a new era of AI where models and agents will primarily learn from their own experiences rather than from human data. Some AI agents can already perform tasks without human oversight.

“It’s a prediction, but I think the era of experience will be much more powerful,” Sutton said. 

Right now, Sutton said, AI models are trained and fine-tuned on human-generated data, including text and images. Current AI models have already consumed most of the high-quality data on the internet, he said. Several large LLM developers, including Toronto-based Cohere, have been sued for their alleged unauthorized use of copyrighted data.

The natural next step, he said, is to train AI models and agents to learn through their own experiences and perceptions. This way, AI models could generate new knowledge rather than just regurgitate what they have been fed by humans.
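For readers unfamiliar with the idea, a minimal sketch of experience-driven learning is tabular Q-learning, a classic form of the reinforcement learning Sutton helped pioneer: the agent improves from its own trial-and-error interaction rather than from a human-labelled dataset. The toy corridor environment and parameters below are illustrative assumptions for this sketch, not anything presented at the talk.

```python
# Illustrative sketch (not from the article): tabular Q-learning on a toy
# 1-D corridor. The agent learns purely from its own experience -- the
# states, actions, and rewards it generates -- with no human-labelled data.
import random

N_STATES = 5          # corridor cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy: mostly exploit what has been learned, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state, action):
    """Toy environment: reward 1.0 only for reaching the rightmost cell."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        action = choose_action(state)
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward the observed reward
        # plus the discounted value of the best next action, using only
        # transitions the agent itself has experienced.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Print the learned greedy action for each cell (should be +1 everywhere).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```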

Despite the existential tone of his talk, Sutton said he spends his days “in the scientific trenches” programming AI algorithms, and that philosophy is just a hobby. He said humans should embrace AI with “courage, pride, and a sense of adventure.” 

“We shouldn’t think about AI as a new and alien thing,” Sutton said. “Philosophers have sought to understand human intelligence for thousands of years,” and this is just a continuation of that, he added. 

RELATED: New Turing Award winner Richard Sutton calls doomers “out of line,” talks path to human-like AI

Sutton’s optimistic tone at Upper Bound stands in contrast to that of his fellow Turing Award winners Yoshua Bengio and Geoffrey Hinton. While Bengio and Hinton have been outspoken about the dangers of unregulated AI and AGI development, Sutton has taken a different tack.

Shortly after being named a Turing Award winner, Sutton told BetaKit he believes the AI doomers are “out of line and the concerns are overblown.” He expressed a fear that AI will become the scapegoat for the world’s problems. “I’m disappointed that my fellow researchers are playing into the way their field is possibly going to be demonized inappropriately,” he said.

Bengio said at World Summit AI that the development of AI agents programmed to preserve themselves could prove catastrophic. Despite his misgivings about AI regulation, Sutton told BetaKit he fears that LLM users will automatically believe whatever AI models generate, even though the models are prone to errors and hallucinations.

Rules around AI have already begun to take shape globally, with the European Union introducing the AI Act to ban certain harmful applications of the technology. Canada has yet to pass its first AI legislation, the proposed Artificial Intelligence and Data Act, which would seek to ensure that AI systems are safe and non-discriminatory.

Feature image courtesy Alberta Machine Intelligence Institute.
