This paper highlights the limitations of large language models (LLMs) on graph-related tasks and proposes G1, a novel approach to address them. G1 substantially improves the graph reasoning capabilities of LLMs by applying reinforcement learning (RL) to synthetic graph-theoretic tasks. To support this, the authors construct Erdős, a large-scale graph reasoning dataset comprising 50 graph-theoretic tasks of varying difficulty, with 100,000 training examples and 5,000 test examples. With G1, a 3B model trained via RL outperforms Qwen2.5-72B-Instruct and exhibits strong zero-shot generalization to new tasks, domains, and graph encoding schemes. The study suggests that RL fine-tuning of LLMs on synthetic data is an efficient, scalable, and robust way to build graph reasoning models. The source code and dataset are publicly available.
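The summary does not detail how Erdős tasks are constructed, but the key property enabling RL on synthetic graph tasks is that each question has a programmatically verifiable answer, which can serve as an exact-match reward signal. Below is a minimal, hypothetical Python sketch (using `networkx`; the function name `make_shortest_path_example` and the prompt format are illustrative assumptions, not the paper's actual pipeline) of generating one such task:

```python
# Hypothetical sketch: generate a synthetic graph-reasoning example whose
# ground-truth answer is computed programmatically, so an RL reward function
# can verify a model's output exactly. Not the authors' actual pipeline.
import random
import networkx as nx

def make_shortest_path_example(n_nodes=8, edge_prob=0.3, seed=0):
    """Generate one shortest-path question with a verifiable answer."""
    rng = random.Random(seed)
    # Resample until the largest connected component has at least two nodes,
    # so a source/target pair with a valid path is guaranteed to exist.
    while True:
        g = nx.gnp_random_graph(n_nodes, edge_prob, seed=rng.randrange(10**6))
        comp = max(nx.connected_components(g), key=len)
        if len(comp) >= 2:
            break
    src, dst = rng.sample(sorted(comp), 2)
    edges = ", ".join(f"({u}, {v})" for u, v in g.edges())
    prompt = (
        f"You are given an undirected graph with nodes 0..{n_nodes - 1} "
        f"and edge list: {edges}. What is the length of the shortest path "
        f"from node {src} to node {dst}?"
    )
    # Exact ground truth for reward computation during RL training.
    answer = nx.shortest_path_length(g, src, dst)
    return {"prompt": prompt, "answer": answer}

if __name__ == "__main__":
    ex = make_shortest_path_example()
    print(ex["prompt"])
    print("ground truth:", ex["answer"])
```

Varying the task family (e.g., connectivity, cycle detection, diameter), graph size, and edge density in a generator like this would yield the kind of difficulty-graded task mix the summary attributes to Erdős.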