OpenAI’s Superalignment Team Dissolved Amid Internal Struggles

In summer 2023, OpenAI formed a team called “Superalignment” to develop ways to steer and control future AI systems powerful enough to pose an extinction risk to humanity. Less than a year later, that team has been dissolved. According to Bloomberg, the company says it is integrating the group’s work more deeply into its broader research efforts. But Jan Leike, who co-led the team and recently resigned, said on social media that internal tensions and a struggle for resources were also at play. Leike criticized OpenAI for prioritizing products over safety, saying his team found it increasingly difficult to get crucial AI safety research done.

The dissolution follows the departures of key figures, including co-founder and Chief Scientist Ilya Sutskever, who led the Superalignment team alongside Leike. Sutskever left six months after taking part in the board’s controversial decision to fire CEO Sam Altman, who was reinstated days later. The team also weathered the dismissal of two of its researchers in April 2024 for allegedly leaking information.

OpenAI says future safety efforts will be led by co-founder John Schulman, with Jakub Pachocki replacing Sutskever as Chief Scientist. Despite the disbandment, the company also maintains a separate “Preparedness” team to manage potential AI risks, signaling a continued commitment to safety, albeit under a different organizational structure.

https://www.engadget.com/the-openai-team-tasked-with-protecting-humanity-is-no-more-183433377.html?src=rss