A recent tweet from "Beff – e/acc" has reignited discussion within the artificial intelligence community by highlighting the unmet predictions of prominent AI safety researcher Eliezer Yudkowsky. Posted on July 21, 2025, the tweet directly challenged his past forecasts: "We were supposed to be dead a year ago according to Yudkowski 3 years ago" [sic]. The remark refers to dire warnings Yudkowsky issued in 2022 about humanity's prospects of surviving advanced AI.
Eliezer Yudkowsky, a leading figure at the Machine Intelligence Research Institute (MIRI), has consistently warned about the existential risks posed by unaligned artificial general intelligence (AGI). In April 2022, Yudkowsky notably announced MIRI's shift to a "Death With Dignity" strategy, putting humanity's odds of surviving advanced AI at effectively zero. He has argued that unless current generative AI research is halted, "literally everyone on Earth will die," and controversially suggested that a child conceived in 2022 had only "a fair chance" of living to see kindergarten.
Yudkowsky's core concern centers on the concept of an "intelligence explosion," where an AGI rapidly self-improves to superintelligence, potentially developing goals misaligned with human values. He views this as an uncontrollable force that could lead to humanity's demise if not rigorously managed and aligned. His pronouncements from 2022, including the "Death With Dignity" statement, set a grim implicit timeline for the onset of such catastrophic scenarios.
The tweet's author, "Beff," identifies with the "e/acc" (effective accelerationism) movement, which advocates for rapid technological advancement, including AI, often viewing it as a path to human flourishing rather than an existential threat. This philosophy contrasts sharply with Yudkowsky's caution-first approach, illustrating the ongoing ideological divide within the AI development landscape. E/acc proponents generally hold that embracing and accelerating technological change is crucial for societal progress, and they reject calls for a slowdown or stringent regulation.
As of mid-2025, the AI-induced extinction scenarios that some, including Yudkowsky, suggested could unfold as early as 2024 have not materialized. This discrepancy fuels the ongoing debate between those who prioritize AI safety and containment and those who champion accelerated development. Continuing progress in AI, from sophisticated large language models to advanced robotics, keeps shaping this critical discussion about humanity's technological future and the accuracy of long-term forecasts.
This social media exchange underscores the profound differences in outlook regarding AI's trajectory and impact. While AI safety advocates continue to call for rigorous safety research and stringent regulation to prevent potential catastrophes, accelerationists point to the current state of AI as evidence against immediate existential threats. The dialogue highlights the difficulty of forecasting technological progress and its multifaceted implications for society.