The proposed federal moratorium on state-level artificial intelligence (AI) regulation has decisively failed in the U.S. Senate, signaling a continued landscape of varied state-led AI policies. Adam Thierer, a prominent voice in AI policy, marked the development with a post-mortem, tweeting "The AI Regulatory Moratorium Fails: What Comes Next?" and linking to his analysis of the policy battle. The outcome ensures that states retain their authority to enact and enforce AI-specific laws.
The moratorium, initially passed by the House of Representatives as part of a larger budget bill, would have barred states for 10 years from creating or enforcing laws governing AI models, AI systems, or automated decision-making. Proponents argued that this federal preemption was necessary to prevent a fragmented regulatory environment that could hinder U.S. innovation and competitiveness, particularly against rivals such as China, and to give the burgeoning AI industry a uniform national framework.
However, the provision faced significant opposition in the Senate, largely over procedural challenges tied to its inclusion in a budget reconciliation bill and concerns about states' rights. Despite attempts to revise the moratorium, the Senate ultimately voted 99-1 to strip it from the bill, a nearly unanimous margin that underscored broad bipartisan opposition to the perceived federal overreach, with even some of the provision's original supporters reversing course.
The failure of the federal moratorium means that the ongoing trend of state-level AI regulation will persist and likely accelerate. States such as California, Colorado, and New York have been at the forefront, introducing and passing legislation addressing transparency, algorithmic discrimination, and safety. For instance, Colorado's Senate Bill 205 mandates disclosures and guardrails for "high-risk" AI systems, while California's Assembly Bill 2013 requires transparency around AI training datasets.
This outcome is a significant development for the tech industry, which has expressed concerns about navigating a patchwork of differing state laws. While some large AI developers initially supported a federal moratorium to streamline compliance, others, like Anthropic, the maker of the Claude AI chatbot, have advocated for lightweight, flexible government rules and transparency requirements. The debate now shifts back to how states will continue to shape AI governance in the absence of a unifying federal framework, leaving companies to adapt to a diverse regulatory landscape.