In this paper, we propose SPIRAL, a self-play framework that improves the reasoning ability of language models without human supervision. In SPIRAL, a model learns by playing multi-turn zero-sum games against itself, removing the need for humans to curate problem-answer pairs or hand-design reward functions. To enable self-play at scale, we build a fully online, multi-turn, multi-agent reinforcement learning system and introduce Role-conditioned Advantage Estimation (RAE) to stabilize training. Experimentally, training Qwen3-4B-Base on Kuhn Poker alone improves mathematical reasoning by 8.6% and general reasoning by 8.4%, outperforming SFT on 25,000 expert game trajectories. Our analysis attributes this transfer to three cognitive patterns: systematic decomposition, expected-value calculation, and case-by-case analysis. Training on multiple games (TicTacToe, Kuhn Poker, Simple Negotiation) further improves performance, as each game develops distinct strengths. Applying SPIRAL to a strong reasoning model (DeepSeek-R1-Distill-Qwen-7B) still yields an average improvement of 2.0%. These results show that self-play on zero-sum games is a promising path to developing transferable reasoning abilities.
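As a rough illustration of the idea behind role-conditioned advantage estimation, the sketch below keeps one running baseline per (game, role) pair and computes each trajectory's advantage as its return minus that role's baseline. This is a minimal sketch under stated assumptions: the class name, the exponential-moving-average decay `alpha`, and the dictionary-keyed interface are illustrative, not the paper's implementation.

```python
from collections import defaultdict

class RoleConditionedAdvantage:
    """Illustrative sketch of role-conditioned advantage estimation (RAE).

    A separate running-mean baseline is maintained per (game, role) pair;
    the advantage of a trajectory is its return minus that baseline.
    The EMA decay `alpha` and this API are assumptions for illustration.
    """

    def __init__(self, alpha: float = 0.95):
        self.alpha = alpha
        # Baselines keyed by (game, role); defaultdict initializes them to 0.0.
        self.baseline = defaultdict(float)

    def advantage(self, game: str, role: int, ret: float) -> float:
        key = (game, role)
        # Update the role-specific baseline as an exponential moving average
        # of observed returns for that role in that game.
        self.baseline[key] = self.alpha * self.baseline[key] + (1 - self.alpha) * ret
        # Advantage = return minus the role-conditioned baseline.
        return ret - self.baseline[key]


# Usage: in a zero-sum game the two roles see mirrored rewards, so each
# role gets its own baseline rather than sharing one.
rae = RoleConditionedAdvantage()
a0 = rae.advantage("kuhn_poker", role=0, ret=+1.0)
a1 = rae.advantage("kuhn_poker", role=1, ret=-1.0)
```

Conditioning the baseline on the role matters because the two sides of a zero-sum game can face systematically different reward distributions (e.g., a first-mover advantage); a single shared baseline would fold that asymmetry into the advantage and bias the policy gradient.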