OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it
OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.
The person, who spoke on condition of anonymity, said some of the team members are being reassigned to multiple other teams within the company.
The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI's "safety culture and processes have taken a backseat to shiny products."
OpenAI's Superalignment team, announced last year, focused on "scientific and technical breakthroughs to steer and control AI systems much smarter than us." At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.
OpenAI did not provide a comment and instead directed CNBC to co-founder and CEO Sam Altman's recent post on X, where he shared that he was sad to see Leike leave and that the company had more work to do. On Saturday, OpenAI co-founder Greg Brockman posted a statement attributed to both himself and Altman on X, asserting that the company has "raised awareness of the risks and opportunities of AGI so that the world can better prepare for it."
News of the team's dissolution was first reported by Wired.
Sutskever and Leike announced their departures on social media platform X on Tuesday, hours apart, but on Friday, Leike shared more details about why he left the startup.
"I joined because I thought OpenAI would be the best place in the world to do this research," Leike wrote on X. "However, I have