OFF-FSP (Offline Fictitious Self-Play) is an offline reinforcement learning algorithm developed for competitive games, enabling policy improvement from a fixed dataset alone. Without access to the game's structure, it simulates interactions against various opponents virtually and learns within an offline self-play framework. To overcome incomplete data coverage, it combines single-agent offline reinforcement learning with fictitious self-play to approximate a Nash equilibrium. Experiments on matrix games, poker, board games, and a real-world human-robot competition task show that OFF-FSP outperforms existing methods.
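
To make the combination of fictitious self-play and offline best-response learning concrete, below is a minimal sketch on a small zero-sum matrix game (rock-paper-scissors). The helper names `reweight_dataset` and `offline_best_response`, the importance-weighting scheme, and the use of a weighted empirical value estimate in place of a full single-agent offline RL algorithm are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of an offline fictitious-self-play loop on rock-paper-scissors.
# Only the fixed dataset is used; no further interaction with the game.
import numpy as np

rng = np.random.default_rng(0)
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])  # row player's payoff for (own action, opponent action)
N_ACTIONS = 3

# Fixed offline dataset: joint actions logged under unknown behavior policies,
# with the row player's observed payoff.
own = rng.choice(N_ACTIONS, size=5000, p=[0.5, 0.3, 0.2])
opp = rng.choice(N_ACTIONS, size=5000, p=[0.2, 0.5, 0.3])
payoff = PAYOFF[own, opp].astype(float)

def reweight_dataset(opp_actions, opp_mixture, behavior_opp):
    """Importance weights that make the logged opponent actions look as if they
    came from the current fictitious-play mixture (a simulated opponent)."""
    return opp_mixture[opp_actions] / behavior_opp[opp_actions]

def offline_best_response(own_actions, payoffs, weights):
    """Stand-in for single-agent offline RL: weighted empirical value per action,
    followed by a greedy best response against the simulated opponent."""
    values = np.zeros(N_ACTIONS)
    for a in range(N_ACTIONS):
        mask = own_actions == a
        values[a] = np.average(payoffs[mask], weights=weights[mask])
    br = np.zeros(N_ACTIONS)
    br[np.argmax(values)] = 1.0
    return br

behavior_opp = np.bincount(opp, minlength=N_ACTIONS) / len(opp)  # estimated from data
population = [np.ones(N_ACTIONS) / N_ACTIONS]                    # start from uniform play

for _ in range(200):
    avg_policy = np.mean(population, axis=0)                     # fictitious-play average
    w = reweight_dataset(opp, avg_policy, behavior_opp)          # simulate that opponent
    population.append(offline_best_response(own, payoff, w))     # offline best response

print("approximate equilibrium strategy:", np.mean(population, axis=0))
```

In this toy setting the averaged strategy drifts toward the uniform Nash equilibrium of rock-paper-scissors; in the full algorithm the weighted empirical estimate would be replaced by an offline RL method operating on the re-weighted dataset.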