Miles Brundage, a prominent AI policy researcher and former Head of Policy Research at OpenAI, recently highlighted a notable behavior in Anthropic's Claude AI. According to a tweet from Brundage, selecting both "Research" and "Extended thinking" options in the Claude interface appears to result in the model primarily engaging in extended thinking, rather than incorporating the research function. This observation points to a potential challenge in how advanced AI models integrate complex, multi-faceted operational modes.
Brundage's insights carry significant weight in the artificial intelligence community, given his extensive background in AI policy and his previous senior role at OpenAI. His work has consistently focused on the responsible development and deployment of AI systems, making his observations on user experience and model behavior particularly relevant. His tweet, stating that "Selecting both 'Research' and 'Extended thinking' in the Claude interface seems to just result in it doing extended thinking but not Research," underscores a perceived functional limitation.
Anthropic has positioned its Claude models, particularly Claude Opus 4 and Sonnet 4, as leaders in advanced reasoning and complex problem-solving. These models are designed with "Extended thinking" as a core feature, allowing for deeper, more sustained reasoning and the ability to alternate between internal thought processes and tool use, such as web search, to refine responses. This mode is intended for tasks requiring rigorous, multi-step analysis and is visible through "thinking summaries."
The "Research" capability within Claude is described as the model's ability to effectively search through external and internal data sources to synthesize comprehensive insights across complex information landscapes. This often involves leveraging web search tools to gather up-to-date and relevant information. The observed behavior suggests that when a user attempts to combine the deep, iterative reasoning of "Extended thinking" with the external information retrieval of "Research," the former may override or diminish the latter's execution.
This reported prioritization could impact users who rely on Claude for tasks demanding both extensive internal processing and thorough external data gathering. While "Extended thinking" aims to provide detailed, step-by-step problem-solving, the apparent neglect of "Research" when the two modes are selected together raises questions about the model's ability to seamlessly blend these distinct, yet often complementary, advanced functionalities. Anthropic continues to evolve Claude's capabilities, with a focus on improving its performance in complex agentic scenarios and long-horizon tasks.