SUNSHINE COAST, Australia – Google DeepMind's advanced Gemini AI model has officially achieved gold medal-level performance at the International Mathematical Olympiad (IMO), solving five of the six exceptionally difficult problems and scoring 35 out of a possible 42 points. It is the first time an AI system has received official gold-level grading from the prestigious competition's organizers, underscoring how quickly AI reasoning capabilities are advancing.

The model, an advanced version of Gemini with "Deep Think" enabled, operated entirely in natural language, producing rigorous mathematical proofs directly from the official problem descriptions within the competition's 4.5-hour time limit. This contrasts with previous AI attempts, which typically required problems to be translated into formal mathematical languages first.

IMO President Prof. Gregor Dolinar confirmed the milestone, noting, "Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow."

Beyond its mathematical prowess, the Gemini model has demonstrated state-of-the-art (SOTA) results in coding and other complex reasoning tasks. That breadth suggests a future in which AI systems significantly accelerate scientific research, assist in complex problem-solving, and streamline development work for engineers and mathematicians. The model's success at the IMO highlights its ability to engage in abstract thinking and synthesize insights across multiple domains.

An internal account, shared on social media by a developer identified as "NomoreID," shed light on the intense development push leading up to the competition.

> "We put all individual recipes (that we figured out before) together and did a yolo run (with the compute that I had to beg various groups to loan) to train our most advanced Gemini model. We finished training 2 days before IMO :D That model achieved SOTA results, not just for math, but coding along with other reasoning tasks, unbelievable!" the developer wrote.

The account underscores both the last-minute nature of the effort and the team's confidence in the model's capabilities.

The result follows Google DeepMind's silver medal performance last year with specialized AI systems. While other AI firms, including OpenAI, have reported similar results unofficially, Gemini's performance was formally certified by the Olympiad coordinators. Google DeepMind plans to make a version of the "Deep Think" model available to trusted testers, including mathematicians, before a broader rollout to Google AI Ultra subscribers, a first step toward wider application of the technology.