This paper presents the results of solving the 2025 International Mathematical Olympiad (IMO) problems using Google Gemini 2.5 Pro, a large language model (LLM). IMO problems demand deep insight, creativity, and formal reasoning, and conventional LLMs are known to struggle with them. To avoid data contamination, we use the newly released IMO 2025 problems, and through pipeline design and prompt engineering we obtain correct solutions to five of the six problems (one of which requires further discussion). This suggests that finding the best way to use a powerful model is as important as the model itself.
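As a rough illustration of what such a pipeline could look like, the sketch below shows a generic generate-verify-refine loop wrapped around a model call. It is only a minimal example under assumed interfaces: `call_model` is a hypothetical placeholder for an LLM API call, and the prompts, iteration count, and acceptance criterion are illustrative rather than the settings used in this work.

```python
# Minimal sketch of a generate-verify-refine prompting pipeline.
# `call_model` is a hypothetical placeholder for an LLM API call;
# prompts and loop limits are illustrative, not this paper's actual settings.

def call_model(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("Wire this to your model API of choice.")

def solve_with_verification(problem: str, max_rounds: int = 3) -> str:
    # Step 1: ask for a complete, rigorous solution attempt.
    solution = call_model(
        f"Solve the following olympiad problem with a full proof:\n{problem}"
    )

    for _ in range(max_rounds):
        # Step 2: ask the model to audit its own attempt for gaps or errors.
        critique = call_model(
            "Carefully verify the proof below. List every gap or error, "
            "or reply 'NO ISSUES' if it is fully rigorous.\n\n"
            f"Problem:\n{problem}\n\nProposed proof:\n{solution}"
        )
        if "NO ISSUES" in critique:
            break  # Accept the attempt once the self-check passes.

        # Step 3: revise the attempt using the critique as feedback.
        solution = call_model(
            "Revise the proof to fix the listed issues.\n\n"
            f"Problem:\n{problem}\n\nProof:\n{solution}\n\nIssues:\n{critique}"
        )

    return solution
```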