Ilya Sutskever, one of the co-founders of and former chief scientist at OpenAI, has launched a new company aimed at building a lab for safe artificial intelligence (AI) development.
The new company, called Safe Superintelligence (SSI), is American, but Sutskever is a Canadian citizen, having studied under AI godfather Geoffrey Hinton while attending the University of Toronto. Sutskever also built AlexNet, a neural network focused on image processing, in collaboration with Hinton and Alex Krizhevsky, which was sold to Google in 2013.
In addition to Sutskever, SSI's other founders include Daniel Levy, a former member of technical staff at OpenAI, and Daniel Gross, Apple's former AI lead. According to Sutskever's LinkedIn, he will serve as both co-founder and chief scientist at SSI.
In a statement posted today on its new website, the company described itself as "the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence."
"Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures," the company wrote.
Superintelligence refers to a theoretical form of advanced AI that surpasses human intelligence. A number of AI experts, including Canadian AI pioneers Hinton and Yoshua Bengio, have repeatedly sounded the alarm about the threats that advanced AI systems could pose to humanity.
RELATED: Geoffrey Hinton, Yoshua Bengio warn "risk of extinction from AI" in public letter
Sutskever's former company OpenAI recently formed an internal team focused on governing potential superintelligent AI systems. However, TechCrunch reported in May that team members alleged they were not given adequate compute resources to carry out their work.
OpenAI has seen an exodus of employees known for their focus on AI safety in recent months. This group includes Sutskever, who left his role in May; Jan Leike, who was responsible for the development of ChatGPT and GPT-4, among other systems; and Daniel Kokotajlo, who recently told Vox he had "lost trust" in OpenAI leadership and the company's ability to handle the safety risks of AI.
Sutskever's last few months at OpenAI were contentious. Multiple outlets have reported that he initially supported the board's ouster of CEO Sam Altman late last year, but later backed Altman's return to the top role.
In its statement today, SSI said it is positioning itself as a safety-first organization, noting it wants to "advance capabilities as fast as possible while making sure our safety always remains ahead."