A user identified as "rohit" recently highlighted a significant operational concern with OpenAI's gpt-5.1-codex-max model. In a public tweet directed at OpenAI and tagging "sandersted," the user reported:

> "Hey @OpenAI the gpt-5.1-codex-max keeps getting stuck in any mode other than xhigh (i.e., in medium or high)."

This suggests potential performance issues when the model operates outside its most demanding setting.
The gpt-5.1-codex-max model, introduced as an upgrade to gpt-5.1-codex, is designed for complex agentic coding tasks. OpenAI has promoted its capabilities, including "Extra High" (xhigh) reasoning, efficient compaction for managing large context windows, and claims of autonomous operation for over 24 hours. It boasts improved token efficiency, reportedly using 30% fewer thinking tokens and running 27% to 42% faster than its predecessor in real-world coding scenarios.
Despite these top-tier gains, the user's report points to inconsistencies in the "medium" and "high" reasoning modes. It aligns with broader discussions in the OpenAI developer community, where users have reported regressions and models getting "stuck" or becoming "slower" across various gpt-5 and gpt-5.1 Codex iterations. Instability in the lower settings could hinder developers who rely on those modes to balance cost and speed against reasoning depth.
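For context, the reasoning tier is something developers choose themselves at configuration time. As a rough sketch (assuming the Codex CLI's `config.toml` format and its `model_reasoning_effort` key; exact key names and accepted values may differ), switching between the affected tiers and the working one might look like:

```toml
# ~/.codex/config.toml — hypothetical snippet illustrating tier selection;
# key names assumed from the Codex CLI configuration format.
model = "gpt-5.1-codex-max"

# "medium" and "high" trade reasoning depth for lower cost and latency;
# per the report, only "xhigh" avoided the model getting stuck.
model_reasoning_effort = "xhigh"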
OpenAI has not yet released an official statement addressing this specific issue with gpt-5.1-codex-max's performance modes. However, the company is known for its continuous efforts to refine its models and integrate user feedback. The reliability of AI coding assistants across different operational tiers remains crucial for their widespread adoption and seamless integration into development workflows.