This paper presents the results of solving the 2025 International Mathematical Olympiad (IMO) problems using Google’s Gemini 2.5 Pro. While existing large language models (LLMs) perform well on standard mathematical benchmarks, they struggle with IMO-level problems. Through careful prompt design and a self-verification pipeline, while avoiding data contamination, we obtain correct solutions to 5 of the 6 problems. This highlights the importance of developing optimal strategies to fully leverage the potential of powerful LLMs for complex reasoning tasks.