London, UK – The landscape of artificial intelligence development is undergoing a significant shift, marked by the dissolution of OpenAI's dedicated "Superalignment" team less than a year after its formation. This development comes as leading tech giants aggressively pursue advanced AI capabilities, fueling a high-stakes talent war and prompting renewed debate over the industry's approach to existential risks.
The Superalignment team, established by OpenAI in July 2023 with the goal of solving the challenge of controlling superintelligent AI within four years, was reportedly disbanded in May 2024. This internal restructuring at one of the world's foremost AI research organizations has raised questions about the industry's commitment to mitigating long-term, catastrophic risks from increasingly powerful AI systems.
Prominent AI researcher Eliezer Yudkowsky, a long-standing critic of what he terms the "Pretend Very Serious people" who have historically influenced AI safety funding, recently commented on the evolving situation. In a social media post, Yudkowsky stated, "For the Pretend Very Serious people who controlled ~all funding in EA and 'AI safety' for ~15 years, a verbatim prediction of this headline would have been treated with deep contempt, as proof you were not Very Serious like them. Reality was out of their bounds." His remarks underscore a deep-seated skepticism regarding the mainstream AI community's ability or willingness to prioritize safety over rapid advancement.
The dissolution of OpenAI's specialized safety unit coincides with Meta's ambitious launch of "Superintelligence Labs," which aims to develop AI systems that surpass human capabilities. The initiative, led by high-profile talent reportedly lured with offers of up to $300 million over four years, has intensified an already fierce competition for top AI engineers. OpenAI CEO Sam Altman has publicly criticized Meta's aggressive recruitment tactics, describing them as "distasteful."
Meanwhile, discussions around AI governance appear to be pivoting away from existential threats toward a more action-oriented framing, as evidenced by the rebranding of "AI Safety Summits" as "AI Action Summits" in early 2025. At the same time, the "AI Safety Clock" moved from 29 minutes to midnight in September 2024 to 24 minutes to midnight by February 2025, a signal of rising risk that critics argue the shift in emphasis does not adequately address. Concerns persist regarding the societal impact of rapidly advancing AI, including job displacement, ethical dilemmas, and the potential for AI systems to exhibit deceptive behaviors, as highlighted by recent studies from Apollo Research and others.