Ilya Sutskever, the co-founder and former chief scientist of OpenAI, introduced his new AI startup called Safe Superintelligence, or SSI.
Sutskever was one of the board members behind the ousting of former OpenAI CEO Sam Altman. Despite expressing regret for his role in the board's actions, Sutskever remains dedicated to the development of AI technologies and their potential to benefit society.
Company Structure and Vision
SSI aims to pursue safe superintelligence with a singular focus, one goal, and one product. Sutskever's vision for SSI is to create a powerful AI system while prioritizing safety and security.
To bring his vision to life, Sutskever has partnered with Daniel Gross, who was previously one of the AI leads at Apple, and Daniel Levy, a former member of the technical staff at OpenAI.
The announcement describes SSI as a startup that “approaches safety and capabilities in tandem.” Unlike AI teams at companies such as OpenAI, Google, and Microsoft, which face outside commercial pressure, SSI's singular focus allows it to avoid distraction from management overhead or product cycles.
SSI's dedication to its mission of developing safe superintelligence, uninfluenced by short-term financial motives, sets it apart. While other AI companies have formed partnerships with major tech players like Apple and Microsoft, similar collaborations between SSI and large AI firms do not appear to be on the horizon.
The company's emphasis on creating a safe and powerful AI system aligns with broader efforts to ensure responsible AI development. As AI technologies continue to evolve and reshape industries, SSI's work could help set safety standards while driving innovation.