Google's Ironwood TPU Intensifies AI Chip Rivalry with Nvidia, Anthropic Commits to 1 Million Units

Google has significantly escalated its challenge to Nvidia's dominance in the artificial intelligence hardware market with the public availability of its seventh-generation Tensor Processing Unit (TPU), codenamed Ironwood. Google began developing its custom TPU silicon in 2013 to rein in the escalating costs and scaling complexities of its AI infrastructure; with Ironwood, that silicon is now positioned as a direct competitor to Nvidia's GPUs, bolstered by a major commitment from AI research firm Anthropic.

"Google's Secret AI Weapon: Will the TPU Crush Nvidia?" wrote Pete Weishaupt in a recent social media post, capturing the growing industry attention on Google's advancements. Ironwood, unveiled in April and made generally available in late 2025, is purpose-built for high-volume, low-latency AI inference and model serving, and Google claims it delivers more than four times the per-chip performance of its predecessor, Trillium (TPU v6e), for both training and inference workloads. Google's proprietary Gemini 3 model already runs entirely on its TPUs, showcasing the chips' capabilities at frontier scale.

Nvidia, which holds a commanding lead in the AI chip sector with an estimated 92% share of the 2024 data center GPU segment, faces increasing pressure. The company reported $130.5 billion in revenue for fiscal year 2025 and has secured over $500 billion in orders for its Blackwell and upcoming Rubin GPUs. However, Google's strategic move to offer its TPUs through Google Cloud and directly to major players such as Meta marks a shift from its historical internal-use model.

A significant development underlining this intensifying competition is Anthropic's plan to access up to 1 million Google TPUs to power its Claude models. This massive commitment underscores the growing viability of Google's custom silicon for large-scale AI operations. The TPUs are integral to Google's AI Hypercomputer system, which is designed to optimize performance and cost across diverse AI workloads, including large language models and recommendation systems.

The rivalry is not solely about technical specifications but also strategic alliances and market positioning. Nvidia has historically countered competitive threats by investing in AI startups to secure their use of Nvidia's hardware. As Google makes its advanced TPUs more accessible, the AI hardware landscape is poised for a dynamic period of innovation and competition, potentially reshaping the industry's reliance on a single dominant provider.