California's Senate Bill 53 (SB 53), known as the Transparency in Frontier Artificial Intelligence Act (TFAIA), is advancing through the legislature and introduces a unique legal mechanism for regulating large AI model developers. This landmark bill aims to address catastrophic risks associated with advanced artificial intelligence, defining them as incidents that could foreseeably lead to the death of 50 or more people or cause over one billion dollars in damages. The legislation mandates that "large frontier developers" — those with over $500 million in annual revenue and models trained with significant computing power — establish and publish robust safety protocols and transparency reports.
A particularly novel aspect of SB 53, as highlighted by Dean W. Ball, a senior fellow at the Foundation for American Innovation, is its provision allowing companies to comply with state requirements by adhering to designated federal laws, regulations, or guidance.

> "It is rare that a state law introduces a genuinely novel legal mechanism, but the latest version of California's frontier AI safety bill, SB 53, does just that," Ball stated in a social media post.

This mechanism provides a pathway for companies to opt into state compliance through federal alternatives, signaling a potential desire from California for broader federal AI standards.
Under SB 53, covered developers must clearly and conspicuously publish a "frontier AI framework" detailing their approach to managing, assessing, and mitigating catastrophic risks. This includes procedures for testing models, defining thresholds for dangerous capabilities, and outlining mitigation strategies. The bill also requires reporting of "critical safety incidents" to the California Office of Emergency Services within 15 days of discovery, with civil penalties for non-compliance of up to $1 million per violation.
The bill has garnered significant attention, including an official endorsement from AI developer Anthropic. While Anthropic expressed a preference for federal regulation, the company stated, "powerful AI advancements won’t wait for consensus in Washington," and views SB 53 as a "solid blueprint for AI governance." This support comes despite lobbying efforts against the bill by some major tech groups, who argue for federal oversight to avoid a patchwork of state regulations.
SB 53 represents a more refined approach compared to previous legislative attempts in California, such as the vetoed SB 1047. Recent amendments to SB 53 notably removed a requirement for third-party audits, addressing some industry concerns about burdensome regulations. Supporters like Dean Ball suggest that the bill's drafters have shown "respect for technical reality" and "legislative restraint," increasing its chances of becoming law and potentially influencing future federal discussions on AI governance.