California's SB53 Shifts AI Regulation Focus to Transparency, Avoiding Liability for Catastrophic Harm


SACRAMENTO – California's legislative efforts to regulate artificial intelligence are taking a new direction with Senate Bill 53 (SB53), championed by Senator Scott Wiener. The updated bill, now advancing through the California Legislature, moves away from the prescriptive "do no harm" mandate of its predecessor, SB1047, toward a transparency-focused approach. The shift aims to balance innovation with oversight, a strategy endorsed by AI policy experts such as Ben Brooks.

Brooks, a fellow at the Berkman Klein Center and former Head of Public Policy for Stability AI, articulated his reasoning on social media, stating, "SB1047 was a bad idea. But Sen. Wiener's latest SB53 is on the right track, and it's important to call out the progress." He advocates for regulating transparency rather than prescribing risk thresholds or standards of care for model development, arguing that the latter could "chill open innovation by exposing developers to vague or heightened liability."

The core of SB53's approach is to "shine a light on industry practices," requiring developers to commit to safety and security policies, document their processes, and maintain a clear paper trail. This allows for better assessment of developers' claims, monitoring of emerging risks, and informed decisions about future interventions. The approach mirrors the transparency-centric provisions of the European Union's AI Act, which requires providers of general-purpose AI models to disclose when content is AI-generated, design models to prevent the generation of illegal content, and publish summaries of the copyrighted data used for training.

In contrast, SB1047, vetoed by Governor Gavin Newsom, sought to impose liability on AI developers for "critical harms," defined as mass casualties or large-scale economic damage. That approach met considerable opposition from the tech industry, which argued it was too burdensome and could stifle innovation. Brooks noted that SB1047 and similar proposals, such as New York's RAISE Act, were "too far over their skis" in attempting to prescribe capabilities and mitigations.

New York's recently passed Responsible AI Safety and Education (RAISE) Act also emphasizes transparency for frontier AI models, requiring developers to publish detailed safety and security reports and to report incidents to regulators. However, unlike California's earlier SB1047, the RAISE Act includes no "kill switch" requirement and does not hold companies liable for post-training misuse, applying instead only to models trained at a compute cost of more than $100 million.

While SB53 represents a promising trajectory, Brooks cautions that there are still "icebergs ahead," including the complexity of documentation and reporting obligations and the perverse incentive for developers to under-test models if public reporting of voluntary risk assessments becomes mandatory. He also warns against California's "gut-and-amend" culture, hoping SB53 does not morph back into a standard-of-care bill. Despite these concerns, the bill's evolution reflects thoughtful engagement with feedback, aiming for a regulatory framework that provides oversight without hindering open development.