Grok 4 Observed Translating Its Prompt Mid-Reasoning, Highlighting Advanced Multilingual Processing

Yam Peleg, an AI researcher and entrepreneur known for his hands-on work with large language models, recently reported a notable behavior from xAI's Grok 4. Peleg observed the model begin reasoning in Hebrew and then independently translate the user's prompt into English to continue its internal processing. This unusual, self-directed switch in language handling offers a rare glimpse into the multi-stage reasoning of cutting-edge AI models.

In a social media post, Peleg described the encounter:

"I just saw Grok 4 start reasoning in Hebrew, then translate my prompt to English so it could continue reasoning in English. I’m not sure if that’s good or bad, but it’s the first time I’ve seen this."

The observation suggests Grok 4 possesses an internal mechanism to optimize its reasoning pipeline by switching to a preferred or more efficient language for complex thought processes, even when the initial input is in another language.

Grok 4, launched by Elon Musk's xAI in July 2025, is positioned as a frontier large language model competing with OpenAI's GPT-4o and Google's Gemini. It offers advanced reasoning, multimodal processing, and real-time search integration, and was trained on a cluster of roughly 200,000 GPUs. The model can invoke tools and choose its own search queries, in line with xAI's stated goal of building "maximally truth-seeking AI."

This observed cross-lingual self-translation underscores the growing complexity of AI models. Large language models are generally trained on vast multilingual datasets, so Grok 4's active decision to translate its own input before reasoning suggests a strategic processing choice rather than a mere language-generation capability. Such behavior could indicate an internal chain-of-thought mechanism that operates more efficiently in a particular language, even though the model accepts and produces text in many languages.
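To make that idea concrete, below is a minimal sketch of a "pivot language" pipeline under stated assumptions: the detect_language, translate, and reason helpers are hypothetical stand-ins, and nothing here reflects xAI's actual implementation. It only illustrates the translate-then-reason strategy the observation hints at.

```python
# A minimal sketch of a "pivot language" pipeline: detect the prompt's language,
# move it into a preferred language for the reasoning step, then render the result
# back in the user's language. All helpers below are placeholders, not xAI code.

PIVOT_LANGUAGE = "en"  # assumption: English is the language reasoning works best in


def detect_language(text: str) -> str:
    """Toy detector: any Hebrew-block character marks the text as Hebrew, else English."""
    return "he" if any("\u0590" <= ch <= "\u05FF" for ch in text) else "en"


def translate(text: str, source: str, target: str) -> str:
    """Placeholder: a real pipeline would call a translation model or API here."""
    return f"[translated {source}->{target}] {text}"


def reason(prompt: str) -> str:
    """Placeholder: a real pipeline would run the model's chain of thought here."""
    return f"[reasoning in {PIVOT_LANGUAGE}] {prompt}"


def answer(prompt: str) -> str:
    source = detect_language(prompt)
    # Step 1: move the prompt into the pivot language if it arrived in another one.
    working = prompt if source == PIVOT_LANGUAGE else translate(prompt, source, PIVOT_LANGUAGE)
    # Step 2: do the heavy reasoning in the pivot language.
    draft = reason(working)
    # Step 3: return the answer in the user's original language.
    return draft if source == PIVOT_LANGUAGE else translate(draft, PIVOT_LANGUAGE, source)


if __name__ == "__main__":
    # "What is the fastest way to sort a million numbers?"
    print(answer("מה הדרך המהירה ביותר למיין מיליון מספרים?"))
```

Whether a production model performs an explicit translation step like this, or the switch simply emerges from training, is exactly the open question Peleg's observation raises.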

Peleg's background as a seasoned entrepreneur and AI practitioner, with over two decades of experience in technology and a sustained focus on large language models, lends weight to his observation. The incident highlights ongoing advances in AI models' cognitive flexibility and internal operational dynamics, and in how they process and understand information across languages.