Ilya Sutskever, the co-founder and former chief scientist of OpenAI, has announced the launch of a new AI company called Safe Superintelligence Inc. (SSI). The startup’s sole objective is to develop a safe superintelligence.
Sutskever emphasized that SSI will pursue safety and capabilities in tandem, allowing it to advance its AI technology rapidly while staying insulated from the external pressures faced by AI teams at companies like OpenAI, Google, and Microsoft.
“Our business model ensures that safety, security, and progress are protected from immediate commercial influences,” the statement explains. “This allows us to expand without disruption.”
Alongside Sutskever, SSI was co-founded by Daniel Gross, who previously led AI efforts at Apple, and Daniel Levy, a former member of technical staff at OpenAI.
Sutskever played a central role in the OpenAI board’s brief ouster of CEO Sam Altman last year, a decision that was reversed within days. He left OpenAI in May and hinted that he was starting a new project.
Shortly after Sutskever’s exit, AI researcher Jan Leike resigned from OpenAI, saying that safety culture and processes had taken a backseat to shiny products. Gretchen Krueger, a policy researcher at OpenAI, also cited safety concerns when announcing her departure.
While OpenAI has struck high-profile partnerships with Apple and Microsoft, SSI’s stated focus suggests it is unlikely to pursue similar deals in the near future.
In an interview with Bloomberg, Sutskever said that safe superintelligence will be SSI’s first product, and that the company will pursue nothing else until that goal is achieved.