“Godfather of deep learning” Geoffrey Hinton quits Google to warn against dangers of AI

Hinton left his decade-long tenure at Google to speak openly about new AI fears.

Geoffrey Hinton, internationally recognized for his work on artificial intelligence (AI), has resigned from Google to warn about the dangers of the technology.

Speaking with The New York Times, the British-Canadian computer scientist explained that he quit his decade-long career at Google to speak more freely about the potential risks of AI.

Hinton worries that the pace of AI development could outstrip people's ability to control it.

Hinton, who was a VP and engineering fellow at Google, is just the latest globally recognized AI leader to raise concerns about the technology. In recent months, two open letters, led in part by Yoshua Bengio and signed by thousands of tech leaders and researchers, have been published to highlight the rapid pace of AI development and the threats it could pose to society.

Hinton told The New York Times that he didn't sign either of those letters because he didn't want to publicly criticize Google or other companies until after he resigned.

As companies race to advance their AI systems, Hinton worries that the heated competition in generative AI could fuel the spread of misinformation through false photos, videos, and text on the web, a problem that has already manifested on social media.

In 2012, Hinton and two of his students at the University of Toronto (U of T) built a neural network that could analyze thousands of photos and teach itself to identify objects such as flowers, dogs, and cars. The group incorporated as a company, which was later sold to Google for $44 million. That work would pave the way for advanced AI systems, including the chatbots ChatGPT and Google's Bard.

Generative AI applications like Midjourney and OpenAI’s ChatGPT have faced public criticism within the last year for enabling the spread of misinformation on the internet. Researchers have predicted that this type of technology could make the production and promotion of false information cheaper and more efficient.

Beyond misinformation, generative AI also raises concerns about users' privacy and copyright infringement, among other risks posed by AI-generated content.

The Canadian government has made several moves to regulate the development and deployment of AI in recent years. Last year, it tabled Bill C-27, wide-ranging privacy legislation that includes what would be Canada's first law regulating high-impact AI systems. The bill reached its second reading in the House of Commons in March.

RELATED: Yoshua Bengio, major tech leaders call for six-month pause on advanced AI development in open letter

Canada's privacy commissioner also launched an investigation into OpenAI in April, in response to a "complaint alleging the collection, use, and disclosure of personal information without consent."

Hinton has notable ties to Canada, where he made several of his breakthroughs in deep learning.

Before moving his research to U of T, Hinton was based in Pennsylvania as a professor at Carnegie Mellon University. He told the Times that he left the United States for Canada because he would have had to take Pentagon funding, and he opposes the use of AI on the battlefield. At the time, most AI research in the US was backed by the defense department.

Hinton is one of three notable AI leaders in Canada alongside Bengio and the University of Alberta’s Richard Sutton. Hinton is a founding member of the Vector Institute in Toronto, while Bengio helped create Montréal’s Mila. Sutton recently left Alphabet-owned DeepMind after it shuttered its Edmonton office.

RELATED: Canadian privacy commissioner launches investigation into ChatGPT

Bengio and Hinton were also named recipients of the 2018 Turing Award alongside Yann LeCun, Meta's chief AI scientist, who has also spoken out about ChatGPT.

In a statement shared with MIT Technology Review, Bengio said Hinton deserves the “greatest credit” for many of the ideas that underpin modern deep learning. “I assume this also makes him feel a particularly strong sense of responsibility in alerting the public about potential risks of the ensuing advances in AI.”

As AI systems continue to improve at a rapid rate, Hinton worries that the pace of their development could outstrip people's ability to control them.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he told the Times. “But most people thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”

Several reactions to Hinton's exit have compared him to earlier tech innovators who expressed regret over their inventions.

John Ruffolo, founder and managing partner of Maverix Equity Partners, compared Hinton's departure from Google to German-born physicist Albert Einstein's misgivings over his contribution to the creation of the atomic bomb. "This departure reminds me of the concerns Einstein had when he realized his life's work was being used by bad actors," Ruffolo wrote on Twitter.

Gerald Butts, former principal secretary to Prime Minister Justin Trudeau, likened Hinton to Victor Frankenstein, the fictional scientist whose creation turned murderous. "When Geoffrey Hinton starts to sound like Victor Frankenstein it's time to worry," Butts said.

Others have taken Hinton’s resignation as a prompt for expanding debates around regulating the technology.

"The fact he's ending his work with Google so he can speak about the risks of AI, should make regulators pay immediate attention to the existential threats he clearly believes exist," tech entrepreneur and former journalist Miro Cernetig wrote in a tweet.

Charlize Alcaraz

Charlize Alcaraz is a journalism student at Toronto Metropolitan University and a staff writer for BetaKit. Follow her on Twitter @charlizealcaraz