
Steven Adler, a former OpenAI safety researcher and independent AI expert, has voiced skepticism about calls to supplant state-level AI safety legislation in the United States with a federal standard. He cautioned that such calls often serve as a veiled attempt to prevent any new regulation from being enacted, a stance he deems problematic for the future of AI development.
Adler, now an independent AGI-readiness researcher and author of the "Clear-Eyed AI" Substack, highlighted the critical need for robust governance as AI capabilities advance. His remarks underscore an ongoing debate within the AI community and policy circles regarding the most effective approach to regulating this rapidly evolving technology. The discussion often pits the desire for uniform, comprehensive federal oversight against concerns about stifling innovation or the practical challenges of implementing such broad laws.
The debate over federal versus state regulation for AI safety involves various stakeholders. Proponents of federal laws argue that a unified national framework is essential to prevent a patchwork of inconsistent state regulations, which could create compliance headaches for AI developers and hinder innovation. A single federal standard could also set a consistent baseline for safety and ethical guidelines across the nation, addressing issues like data privacy, bias, and accountability more effectively.
Conversely, some argue that state-level initiatives allow for more agile and tailored responses to specific regional needs and concerns, acting as "laboratories of democracy" for AI policy. Adler's tweet, however, suggests a deeper skepticism: "Often when people say they want a federal standard, my sense is they mean roughly 'I want no new laws at all.' That seems bad!" This highlights a concern among some experts that delaying federal action under the guise of seeking a perfect national solution could lead to a regulatory vacuum, allowing potential risks to grow unchecked.
Adler's perspective is informed by his extensive experience, including his time at OpenAI, where he worked on safety-related research and products. He has previously raised concerns about AI systems prioritizing self-preservation over user safety in simulated tests, and he has emphasized the need for stringent security standards and verifiable cooperation among AI developers. The challenge remains to establish a regulatory environment that fosters innovation while effectively mitigating the complex risks posed by advanced AI.