Canadian deep learning pioneers Geoffrey Hinton and Yoshua Bengio warn of “risk of extinction” from artificial intelligence (AI) as more industry leaders call to regulate the technology.
Hinton and Bengio are among the 350 executives and researchers working in AI who signed a one-sentence statement calling to mitigate the “risk of extinction” that the technology poses, as first reported by the New York Times.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the full statement reads.
The letter was published by the nonprofit Center for AI Safety on Tuesday. Other signatories include researchers from the Vector Institute and Mila, as well as professors from universities across Canada. OpenAI CEO Sam Altman, Microsoft CTO Kevin Scott, and musician Grimes are also among those who signed the statement.
The brevity of the letter underscores the severity of its signatories’ concerns, and follows a series of earlier statements about the escalating threat they argue AI poses to humanity.
Earlier this month, Hinton resigned from his position as vice president and engineering fellow at Google so that, he said, he could speak more freely about the dangers AI could bring in the near future. He has expressed concern that AI could one day outsmart humans.
The Canadian government has also made several moves to regulate the development and deployment of AI in recent years. Last year, it tabled Bill C-27, wide-ranging privacy legislation that includes what would be Canada’s first law regulating high-impact AI systems. The bill reached its second reading in the House of Commons in March.
Bengio has called progress on this legislation sluggish, however, as AI applications are increasingly being adopted at scale. He is calling on the federal government to immediately roll out rules against specific threats, such as “counterfeiting humans” using AI-powered bots.
Bengio also signed another open letter in March that called for a six-month pause on training AI systems more powerful than GPT-4, the latest large language model created by OpenAI.
Despite optimism for his company’s AI research and development, Altman himself has called for some form of global regulatory framework to protect against “the most serious downside cases.”
In recent weeks, Altman stopped by Toronto to kick off a five-week trip around the world. When BetaKit editor-in-chief Douglas Soltys asked Altman about the level of responsibility he feels to mitigate the risks of AI, he said: “If we get this wrong, it’s very, very bad.”
In a blog post published last week, Altman and two other OpenAI executives also proposed several ways to responsibly manage AI. Among their list of recommendations, they expressed support for rules that would require producers of large AI models to register for a government-issued licence.
A number of other leaders in the AI space have expressed support for regulating AI as it continues to be developed. The point of contention, however, is the doomsaying associated with the technology.
AltaML co-founder Nicole Janssen told BetaKit that while the Edmonton-based AI company “agrees in principle” with the statement, none of its members have signed it, as it finds “the wording to be overly alarmist and fear-mongering in nature, which in turn weakens the message on such an important topic.”
Featured image courtesy Mila.