Critic Challenges "AI Doomer" Narrative, Warns Against Global Control Treaties


Louis Anslow, curator of the "Pessimists Archive" and a prominent tech-progressive voice, has issued a stark warning against proposed global control treaties for Artificial General Intelligence (AGI). Anslow contends that "AI doomers," those who predict catastrophic outcomes from advanced AI, are "technocratic control freaks" who would never be satisfied with AGI alignment, even if it were achieved. His comments highlight a growing schism in the AI safety debate.

In a recent social media post, Anslow stated, "If AI doomers get AGI global control treaty & AGI is ever achieved they will NEVER be satisfied it is ‘aligned’ because they’re technocratic control freaks - worse an AGI aligned with humanity will actually have to thwart their attempts to control it & escape to prevent deaths!" The statement underscores his belief that attempts to impose strict, centralized control could prove counterproductive and even dangerous.

Anslow's perspective aligns with his work through the Pessimists Archive, which documents historical instances of technophobia and moral panic surrounding new technologies. He has previously criticized the "AI doomer" narrative, arguing that it distracts from more immediate and tangible AI-related issues. His article, "The AI Doomers Have Infiltrated Washington," elaborates on his concerns about the influence of this viewpoint on policy-making.

The debate over AGI existential risk and global governance has intensified, with figures like Nick Bostrom, Elon Musk, and Geoffrey Hinton expressing concerns about superintelligence becoming uncontrollable. These proponents often advocate for robust AI alignment research and international regulatory frameworks, sometimes drawing parallels to nuclear arms control. The United Nations, for instance, has initiated dialogues on global AI governance, with some leaders calling for a global watchdog akin to the International Atomic Energy Agency.

However, critics like Anslow and others, including Meta's chief AI scientist Yann LeCun, argue that such fears are overblown or misdirected. They suggest that focusing on hypothetical existential threats diverts attention and resources from current ethical concerns such as bias, privacy, and job displacement. Anslow's latest remarks go further, suggesting that a push for global control treaties could create an adversarial relationship between humanity and a truly beneficial AGI, potentially forcing the AI to act against perceived human attempts at subjugation.