Concerns have been raised regarding the alleged influence of the Effective Altruism (EA) movement and its "AI-doomerism" perspective within the US government, particularly through the actions of RAND Corporation CEO Jason Matheny. According to a recent social media post by Melinda B. Chu, Matheny, a former senior Biden official, is accused of "inserting personnel who share his AI-doomerism worldview" into government and government contractor roles. The post points to a 2017 speech at an effective altruism forum in which Matheny reportedly laid out his vision for AI regulation.
The allegations, attributed to a former Defense Department official familiar with the industry's leaders, suggest a deliberate strategy by Matheny to shape the development and regulation of artificial intelligence. As the tweet states, Matheny "has been very deliberate about inserting personnel who share his AI-doomerism worldview into government and government contractor roles." The effort is reportedly driven by a desire to reduce AI risks, a core priority for a segment of the effective altruism movement.
Effective Altruism is a philosophical and social movement that applies evidence and reason to determine the most effective ways to improve the world. A significant portion of the EA community focuses on AI safety, advocating for research and policy interventions to mitigate potential existential risks from advanced AI. Critics, however, sometimes use the term "AI doomerism" to characterize the most severe of these concerns as overly pessimistic or alarmist.
While the tweet's author, Melinda B. Chu, expressed a strongly critical view, stating that "EA is more like a cult," Matheny and other prominent figures, such as Anthropic CEO Dario Amodei, have publicly distanced themselves from the "effective altruism" label. Matheny has said in recent interviews that while he shares some of the movement's goals, he does not identify with it as a whole. Amodei, for his part, has described a shift in focus from broad EA principles to specific AI safety research within his company, even though his foundational concerns about AI risk align with early EA thinking.
The RAND Corporation, a non-profit global policy think tank, conducts extensive research on artificial intelligence, focusing on its implications for national security and societal well-being. RAND's public stance on AI regulation generally advocates a balanced approach, emphasizing robust governance frameworks and international cooperation to mitigate risks without stifling innovation. The current debate underscores the ongoing tension between competing approaches to AI safety and the question of which perspectives shape government policy.