San Francisco, CA – Jack Clark, Co-founder and Head of Policy at Anthropic, announced the release of a comprehensive framework designed to guide the development of a thriving artificial intelligence (AI) ecosystem. The framework, detailed in a recent social media post, is designed to be applicable at the federal, state, and international levels and is intended to contribute specific ideas to the ongoing public conversation about AI governance.
"We’ve written this framework in such a way that it could be applied at federal, state, or international levels, and are sharing it to try and contribute more specific ideas to the public conversation around building a thriving AI ecosystem," Clark stated in the tweet. This initiative underscores Anthropic's commitment to shaping responsible AI development.
At its core, Anthropic's proposal advocates judicious, narrowly targeted regulation that balances the transformative benefits of AI with the need to mitigate its most significant risks. The company emphasizes the urgency of government action, pointing to rapid advances in AI capabilities, particularly in cyber operations and the potential for misuse in chemical, biological, radiological, and nuclear (CBRN) domains.
Central to the framework are three key principles for effective AI regulation: transparency, incentives for better safety and security practices, and simplicity with a clear focus. Anthropic suggests requiring companies to publish Responsible Scaling Policies (RSPs) and risk evaluations for new AI systems, creating a public record of risks and best practices. These RSPs, which Anthropic has formally implemented since September 2023, serve as adaptive frameworks for identifying, evaluating, and mitigating catastrophic risks, with safety measures scaling in proportion to an AI system's capabilities.
The framework is designed to be flexible, acknowledging the fast-evolving nature of AI technology. Anthropic believes that while federal legislation would be ideal for uniform application, state-level regulation could serve as a necessary backstop given the urgency. The company also highlights that such standardized approaches, particularly across international borders, could ultimately reduce the overall cost of doing business for AI firms.
By sharing this framework, Anthropic aims to foster collaboration among policymakers, the AI industry, safety advocates, and civil society. The ultimate goal is to ensure that AI technologies are developed and deployed in ways that maximize societal benefits while proactively addressing potential catastrophic risks, thereby building a secure and prosperous AI-driven future.