In this paper, we study the effect of scaling on the inference performance of large language models (LLMs). We present a synthetic multi-step inference environment that mimics the structure and distribution of real-world large-scale knowledge graphs, and evaluate the inference performance of LLMs by having them predict missing edges in incomplete graphs. The experimental results reveal a U-shaped loss curve, showing that excess parameters can hurt inference performance because the model memorizes the training data rather than generalizing. We investigate how factors such as graph structure, model size, and number of training steps shape this curve, and propose an empirical scaling law that linearly maps the search entropy of a knowledge graph to the optimal model size, allowing the optimal size to be predicted for a given graph. In conclusion, this study provides new insight into the relationship between scaling and inference in LLMs, and suggests a way to optimize performance on inference tasks.
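To make the claimed scaling law concrete, a minimal illustrative form is sketched below; the symbols $N^{*}$ (optimal parameter count), $H_{\mathrm{search}}$ (knowledge-graph search entropy), and the fitted constants $a$, $b$ are placeholder notation introduced here, not taken from the paper itself.

% Hypothetical affine (linear) form of the empirical scaling law:
% the optimal model size is assumed to grow linearly with the
% search entropy of the knowledge graph. Symbols are placeholders.
\begin{equation}
  N^{*} \;\approx\; a \, H_{\mathrm{search}} + b
\end{equation}
% where $a$ and $b$ would be constants fitted to the observed
% U-shaped loss curves across graphs and model sizes.

Under this reading, estimating $a$ and $b$ from a family of training runs would let one choose a model size for a new knowledge graph directly from its measured search entropy.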