Canadian OpenAI co-founder Ilya Sutskever launches new company focused on safe AI development

New company follows a recent exodus of safety-focused employees from OpenAI.

Ilya Sutskever, a co-founder and former chief scientist of OpenAI, has launched a new company aimed at building a lab for safe artificial intelligence (AI) development.

The new company, called Safe Superintelligence (SSI), is American, but Sutskever is a Canadian citizen who studied under AI godfather Geoffrey Hinton while attending the University of Toronto. Sutskever also built AlexNet, a neural network focused on image processing, in collaboration with Hinton and Alex Krizhevsky, and the technology was sold to Google in 2013.


In addition to Sutskever, SSI’s other founders include Daniel Levy, a former member of technical staff at OpenAI, and Daniel Gross, Apple’s former AI lead. According to Sutskever’s LinkedIn, he will serve as both co-founder and chief scientist at SSI.

In a statement posted today on its new website, the company described itself as “the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.”

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the company wrote.

Superintelligence refers to a theoretical form of advanced AI that surpasses human intelligence. However, a number of AI experts, including Canadian AI pioneers Hinton and Yoshua Bengio, have repeatedly sounded the alarm about the threats that advanced AI systems could pose to humanity.

RELATED: Geoffrey Hinton, Yoshua Bengio warn “risk of extinction from AI” in public letter

Sutskever’s former company, OpenAI, recently formed an internal team focused on governing potential superintelligent AI systems. However, TechCrunch reported in May that team members alleged they were not given adequate compute resources to carry out their work.

OpenAI has seen an exodus of employees known for their focus on AI safety in recent months. This group includes Sutskever, who left his role in May; Jan Leike, who helped develop ChatGPT and GPT-4, among other systems; and Daniel Kokotajlo, who recently told Vox he had “lost trust” in OpenAI leadership and the company’s ability to handle the safety risks of AI.

Sutskever’s last few months at OpenAI were contentious. Multiple outlets have reported he initially supported the board’s ouster of CEO Sam Altman late last year, but later backed Altman’s return to the top role.

In its statement today, SSI said it is positioning itself as a safety-first organization, noting it wants to “advance capabilities as fast as possible while making sure our safety always remains ahead.”

Isabelle Kirkwood

Isabelle is a Vancouver-based writer with 5+ years of experience in communications and journalism and a lifelong passion for telling stories. For over two years, she has reported on all sides of the Canadian startup ecosystem, from landmark venture deals to public policy, telling the stories of the founders putting Canadian tech on the map.
