
Anthropic's newly released Claude Opus 4.5 reportedly exhibits superior token efficiency compared to OpenAI's GPT-5.1, particularly on demanding reasoning tasks. A recent social media post by "Lisan al Gaib" highlighted that Opus 4.5 achieves comparable or better results with significantly fewer tokens, translating to substantial cost savings. This development underscores the intensifying competition in the large language model (LLM) market, where efficiency and cost-effectiveness are becoming critical differentiators.
Claude Opus 4.5, launched on November 24, 2025, is positioned by Anthropic as its most capable model, excelling in coding, agentic workflows, and computer use. The company announced a revised pricing structure of $5 per million input tokens and $25 per million output tokens, alongside claims of cutting token usage by up to 65% while maintaining or improving performance. Early testers have noted its ability to handle complex problems with fewer iterations and greater first-pass correctness.
OpenAI's GPT-5.1, including its "Thinking" variant, was also released in November 2025, with API pricing set at $1.25 per million input tokens and $10 per million output tokens. While GPT-5.1 features adaptive reasoning to optimize token use, reports indicate that "thinking tokens" on complex tasks can accumulate rapidly, potentially increasing overall costs. OpenAI has emphasized GPT-5.1's improvements in conversational fluidity and its flexible adaptation to a wide range of tasks.
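At list prices, the trade-off comes down to simple arithmetic: Opus 4.5's output tokens cost 2.5 times as much as GPT-5.1's, so Opus must finish a task using no more than 40% of GPT-5.1's output tokens just to break even on output cost. Below is a minimal sketch of that calculation, using the published per-million rates and assuming reasoning or "thinking" tokens are billed at each model's output rate, as both APIs generally bill them.

```python
# Break-even arithmetic on the list prices cited above (USD per million tokens).
# Assumes reasoning/"thinking" tokens are billed at each model's output rate.
OPUS_45_OUTPUT_RATE = 25.00  # Claude Opus 4.5
GPT_51_OUTPUT_RATE = 10.00   # GPT-5.1

# Opus 4.5 output tokens cost this many times more per token:
price_ratio = OPUS_45_OUTPUT_RATE / GPT_51_OUTPUT_RATE          # 2.5

# Fraction of GPT-5.1's output tokens Opus 4.5 can use and still match its cost:
break_even_fraction = GPT_51_OUTPUT_RATE / OPUS_45_OUTPUT_RATE  # 0.4

print(f"{price_ratio:.1f}x price gap; break-even at "
      f"{break_even_fraction:.0%} of GPT-5.1's output tokens")
```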
The tweet directly compared the models, stating, "the reasoning efficiency of Opus 4.5 is honestly off the charts, it's at least twice as efficient as GPT-5.1 High." The author further elaborated on specific token usage, noting that "GPT-5.1 High Thinking spends like 30-60k tokens," while "Claude 4.5 Opus beats it with less than 16k." For a specific task, the tweet claimed GPT-5.1 used 117,000 tokens compared to Opus 4.5's 51,600, indicating a more than twofold reduction in token consumption.
This efficiency gain for Claude Opus 4.5, despite its higher per-token price, suggests a lower "cost per resolved task" for complex operations. The competitive landscape among frontier models is increasingly shifting from raw capability to practical efficiency, with developers and enterprises seeking models that deliver reliable results without incurring prohibitive operational expenses. The focus on token efficiency and cost optimization is expected to drive further innovation in the rapidly evolving AI industry.
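To make "cost per resolved task" concrete, the sketch below combines the list prices quoted above with per-task token counts and an expected number of attempts, the latter capturing the first-pass-correctness point. The token counts and attempt figures are illustrative placeholders rather than measurements; whether Opus 4.5 comes out cheaper for a given workload depends on whether its token and retry savings outweigh its 2.5x higher output rate.

```python
def cost_per_resolved_task(input_tokens, output_tokens, price_in, price_out, attempts=1.0):
    """Expected API cost in USD to obtain one accepted result.

    price_in / price_out are USD per million tokens; attempts is the average
    number of calls needed before a result passes review."""
    per_call = (input_tokens * price_in + output_tokens * price_out) / 1_000_000
    return per_call * attempts

# List prices cited in this article (USD per million tokens).
OPUS_45 = {"price_in": 5.00, "price_out": 25.00}
GPT_51 = {"price_in": 1.25, "price_out": 10.00}

# Hypothetical workload: ~20k tokens of prompt/context per call, reasoning-heavy
# output in the range the tweet describes, and a higher retry rate for the model
# that needs more iterations to reach a correct answer.
print(cost_per_resolved_task(20_000, 16_000, **OPUS_45, attempts=1.1))  # ~$0.55
print(cost_per_resolved_task(20_000, 45_000, **GPT_51, attempts=1.3))   # ~$0.62
```

Under these assumptions, the higher per-token price is more than offset by the smaller token footprint and fewer retries, which is precisely the trade developers are now weighing.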