This paper presents a novel approach for generalizing to unseen graph tasks without task-specific guidance, overcoming two limitations: the fixed label spaces of GNNs and the lack of structural inductive bias in LLMs. Leveraging Large Reasoning Models (LRMs), we reframe graph tasks such as node classification, link prediction, and graph classification as text-based inference problems. To this end, we introduce a dataset containing detailed inference traces for each task and develop Graph-R1, a reinforcement learning framework that guides inference over linearized graphs using task-specific reconsideration templates. Experimental results demonstrate that Graph-R1 produces interpretable and effective predictions, outperforming state-of-the-art baselines in zero-shot settings. This study highlights the potential of graph learning through explicit inference and provides a new resource for future research.
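To make the core idea of "linearized graphs" concrete, the sketch below shows one plausible way a small graph could be serialized into plain text for an LLM-based reasoner. The exact template used by Graph-R1 is not specified in this abstract, so the function name `linearize_graph` and the output format here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of graph linearization for text-based inference.
# The serialization format is an assumption for illustration; the paper's
# actual template may differ.

def linearize_graph(nodes, edges, node_features=None):
    """Serialize a graph into a plain-text description an LLM can read."""
    lines = [f"The graph has {len(nodes)} nodes and {len(edges)} edges."]
    for n in nodes:
        feat = ""
        if node_features and n in node_features:
            feat = f" (features: {node_features[n]})"
        lines.append(f"Node {n}{feat}.")
    for u, v in edges:
        lines.append(f"Node {u} is connected to node {v}.")
    return "\n".join(lines)

# Example: a 3-node path graph with a toy color feature per node
text = linearize_graph(
    nodes=[0, 1, 2],
    edges=[(0, 1), (1, 2)],
    node_features={0: "red", 1: "blue", 2: "red"},
)
print(text)
```

A node classification task could then be posed by appending a question such as "What is the label of node 2?" to this text, letting the model produce an explicit reasoning trace before its answer.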