Omidyar Network's recent acquisition of shares in artificial intelligence firm Anthropic has ignited debate within the tech policy community, drawing accusations of hypocrisy and raising concerns about market competition. The philanthropic investment, in which Omidyar purchased just under 50,000 Anthropic shares out of FTX's bankruptcy proceedings, was accompanied by a public statement justifying the move. Adam Kovacevich, a prominent voice in tech policy, criticized the action on social media, tweeting that Omidyar "clearly anticipated that this would: A) expose them as hypocrites, and B) raise questions about Omidyar supporting policies that hurt Anthropic’s competitors."
Omidyar Network, a philanthropic investment firm, defended its investment by stating its commitment to supporting the safe and responsible development of generative AI. Mike Kubzansky, CEO of Omidyar Network, emphasized that the purchase reflects their belief in "investing in generative AI that protects and promotes the public interest and prioritizes the long-term benefit of humanity." He highlighted Anthropic's structure as a Public Benefit Corporation and its unique "long-term benefit trust" governance model, which aims to buffer against direct investor influence and reinforce safety priorities.
The criticism from Kovacevich and others stems from a broader tension in the AI policy landscape. Debates around AI regulation, particularly proposals for licensing regimes for advanced AI models, have raised concerns that such policies could inadvertently favor well-resourced, established companies like Anthropic and OpenAI. Critics argue that these regulations might create significant barriers for smaller, less-resourced startups, thereby concentrating power and potentially stifling innovation in the burgeoning AI sector.
Adding to the complexity, Anthropic itself has faced scrutiny over its internal safety commitments. Reports indicate that the company's Responsible Scaling Policy (RSP), which outlines its approach to managing AI risks, has shifted its definitions of higher AI Safety Levels (ASL), particularly ASL-4. While Anthropic maintains that it rigorously refines and updates these commitments, such adjustments can fuel concerns about the consistency of self-regulatory measures in a rapidly evolving AI industry.
The situation underscores the delicate balance philanthropic organizations must strike when investing in for-profit tech companies, even with stated intentions of fostering responsible development. As the AI sector continues its rapid expansion, the interplay among investment, corporate governance, and policy advocacy remains a critical focus of public and industry scrutiny.