Ilya Sutskever, a co-founder and former chief scientist of OpenAI, has launched a new company aimed at building a lab for safe artificial intelligence (AI) development.
The new company, called Safe Superintelligence (SSI), is American, but Sutskever is a Canadian citizen, having studied under AI godfather Geoffrey Hinton while attending the University of Toronto. Sutskever also built AlexNet, a neural network focused on image processing, in collaboration with Hinton and Alex Krizhevsky; their startup was acquired by Google in 2013.
In addition to Sutskever, SSI's other founders include Daniel Levy, a former member of technical staff at OpenAI, and Daniel Gross, Apple's former AI lead. According to Sutskever's LinkedIn, he will serve as both co-founder and chief scientist at SSI.
In a statement posted today on its new website, the company described itself as "the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence."
"Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures," the company wrote.
Superintelligence refers to a theoretical form of advanced AI that is smarter than the human brain. A number of AI experts, including Canadian AI pioneers Hinton and Yoshua Bengio, have repeatedly sounded the alarm about the threats that advanced AI systems could pose to humanity.
RELATED: Geoffrey Hinton, Yoshua Bengio warn "risk of extinction from AI" in public letter
Sutskever's former company OpenAI recently formed an internal team focused on governing potential superintelligent AI systems. However, TechCrunch reported in May that team members alleged they were not given adequate compute resources to carry out their work.
In recent months, OpenAI has seen an exodus of employees known for their focus on AI safety. This group includes Sutskever, who left his role in May; Jan Leike, who co-led the company's superalignment team alongside Sutskever; and Daniel Kokotajlo, who recently told Vox he had "lost trust" in OpenAI leadership and the company's ability to handle the safety risks of AI.
Sutskever's last few months at OpenAI were contentious. Multiple outlets have reported he initially supported the board's ouster of CEO Sam Altman late last year, but later supported Altman's return to the top role.
In its statement today, SSI said it is positioning itself as a safety-first organization, noting it wants to "advance capabilities as fast as possible while making sure our safety always remains ahead."