This paper demonstrates that Google's Gemini 2.5 Pro, a large language model (LLM), can solve five of the six problems of the 2025 International Mathematical Olympiad (IMO). IMO problems are original and demanding, requiring deep insight, creativity, and formal reasoning, and have proved hard for existing LLMs. We use the newly released 2025 problems to avoid data contamination, and we achieve this result through careful prompt design and a self-validation pipeline. These findings highlight the importance of developing optimal strategies to fully exploit the potential of powerful LLMs on complex reasoning tasks.
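For illustration, a minimal sketch of a generate-then-validate loop of the kind such a pipeline implies is shown below. The `llm` callable, the two prompts, and the "ACCEPT"-based acceptance criterion are assumptions made for exposition only, not the exact procedure used in the study.

```python
from typing import Callable, Optional

def solve_with_self_validation(
    problem: str,
    llm: Callable[[str], str],   # any text-in / text-out model call
    max_attempts: int = 5,
) -> Optional[str]:
    """Draft a solution, then ask the model to audit it; keep only accepted drafts."""
    for _ in range(max_attempts):
        # Drafting pass: request a complete, rigorous proof.
        draft = llm(
            "Solve the following IMO problem with a complete, rigorous proof:\n" + problem
        )
        # Validation pass: ask the model to check its own proof for gaps or errors.
        verdict = llm(
            "Check the following proof for gaps or errors. "
            "Reply with ACCEPT if it is fully correct, otherwise list the flaws.\n" + draft
        )
        if verdict.strip().upper().startswith("ACCEPT"):
            return draft   # draft survived self-validation
    return None            # no draft was accepted within the attempt budget
```

In practice, the verdict from a rejected attempt could also be fed back into the next drafting prompt so the model revises rather than restarts; the sketch above keeps the two passes independent for clarity.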