Months after resigning from AI developer OpenAI, former chief scientist Ilya Sutskever’s new venture Safe Superintelligence (SSI) has raised $1 billion in funding, the company announced on Wednesday.
According to SSI, the funding round included investments from NFDG, a16z, Sequoia, DST Global, and SV Angel. Reuters, citing sources “close to the matter,” reported that SSI is already valued at $5 billion.
SSI is building a straight shot to safe superintelligence.
We’ve raised $1B from NFDG, a16z, Sequoia, DST Global, and SV Angel.
We’re hiring: https://t.co/DmFWnrc1Kr
— SSI Inc. (@ssi) September 4, 2024
“Mountain: identified,” Sutskever tweeted on Wednesday. “Time to climb.”
Safe Superintelligence did not immediately respond to a request for comment from Decrypt.
In May, Sutskever and Jan Leike resigned from OpenAI following the departure of Andrej Karpathy in February. In a post on Twitter, Leike cited a lack of resources and safety focus as the reason for his decision to leave the ChatGPT developer.
“Stepping away from this job has been one of the hardest things I have ever done,” Leike wrote, “because we urgently need to figure out how to steer and control AI systems much smarter than us.”
Sutskever’s departure came, according to a report by The New York Times, after he led the OpenAI board and a handful of OpenAI executives to oust co-founder and CEO Sam Altman in November 2023. Altman was reinstated a week later.
In June, Sutskever announced the launch of his new AI development company, Safe Superintelligence Inc., which he co-founded with Daniel Gross, a former AI lead at Apple, and Daniel Levy, who also previously worked at OpenAI.
According to Reuters, Sutskever serves as SSI’s chief scientist, with Levy as principal scientist and Gross handling computing power and fundraising.
“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” Safe Superintelligence wrote on Twitter in June. “Our team, investors, and business model are all aligned to achieve SSI.”
With generative AI becoming more commonplace, developers have looked for ways to assure consumers and regulators that their products are safe.
In August, OpenAI and Claude AI developer Anthropic announced agreements with the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) to establish formal collaborations with the U.S. AI Safety Institute (AISI) that would give the agency access to major new AI models from both companies.
“We are happy to have reached an agreement with the U.S. AI Safety Institute for pre-release testing of our future models,” OpenAI co-founder and CEO Sam Altman wrote on Twitter. “For many reasons, we think it’s important that this happens at the national level. [The] U.S. needs to continue to lead.”