Self-play fine-tuning (SPIN) can transform weak large language models (LLMs) into strong ones, but it faces two challenges in text-to-SQL tasks: SPIN fails to generate new information, and the large number of correct SQL queries produced by the adversary model degrades the master model's ability to generate accurate SQL queries. To address these issues, this paper proposes SPFT-SQL, a novel self-play fine-tuning method specifically designed for text-to-SQL tasks. Prior to self-play, SPFT-SQL introduces a validation-based iterative fine-tuning approach that iteratively synthesizes high-quality fine-tuning data from the database schema and validation feedback, improving model performance and building a model base with diverse capabilities. In the self-play fine-tuning phase, an error-based loss is proposed that encourages incorrect outputs from the adversary model, enabling the master model to distinguish correct SQL from the incorrect SQL generated by the adversary. Extensive experiments and in-depth analyses on six open-source LLMs and five widely used benchmarks demonstrate that the proposed method outperforms existing state-of-the-art (SOTA) methods.
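For concreteness, one plausible instantiation of such an error-based objective, written in the spirit of the original SPIN logistic loss, is sketched below; the notation is illustrative rather than the paper's own: $p_{\theta}$ is the master model being trained, $p_{\theta_t}$ is the adversary model from the previous iteration, $y^{+}$ is a verified-correct SQL query for question $x$, $y^{-}$ is an adversary output verified as incorrect (e.g., by execution against the database), and $\lambda$ is a scaling hyperparameter.
\[
\mathcal{L}(\theta)
= \mathbb{E}_{(x,\,y^{+})\sim\mathcal{D},\; y^{-}\sim p_{\theta_t}(\cdot\mid x)}
\left[
\ell\!\left(
\lambda \log \frac{p_{\theta}(y^{+}\mid x)}{p_{\theta_t}(y^{+}\mid x)}
\;-\;
\lambda \log \frac{p_{\theta}(y^{-}\mid x)}{p_{\theta_t}(y^{-}\mid x)}
\right)
\right],
\qquad
\ell(t) = \log\!\bigl(1 + e^{-t}\bigr).
\]
Under this sketch, only adversary outputs verified as erroneous contribute to the negative term, which reflects the motivation stated above: correct SQL produced by the adversary is not pushed down, so it cannot degrade the master model's ability to generate accurate queries.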