This paper investigates whether large language models can improve their performance by generating their own questions and answers, without relying on external data. To this end, we propose an asymmetric self-play framework called the Self-Questioning Language Model (SQLM), consisting of a proposer that generates questions and a solver that answers them, both trained with reinforcement learning. The proposer is rewarded for generating problems of appropriate difficulty, while the solver's answers are scored by majority voting over multiple samples, since no ground-truth labels are available. For coding problems, the proposer instead generates unit tests, which are used to verify the solver's solutions. We conduct experiments on three benchmarks: three-digit multiplication, algebra problems from the OMEGA benchmark, and programming problems from Codeforces, and demonstrate performance improvements without any external data.
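As a rough illustration of the reward scheme described above, the sketch below computes a majority-vote reward for the solver's samples, an intermediate-difficulty bonus for the proposer, and a unit-test pass rate for coding problems. The function names, thresholds, and reward shaping are assumptions made for illustration; they are not the paper's exact formulation.

```python
# A minimal sketch of the self-questioning reward scheme, assuming a simple
# majority-vote pseudo-label for the solver and an "intermediate difficulty"
# bonus for the proposer. All names and shaping choices here are illustrative
# assumptions, not the paper's exact method.
from collections import Counter
from typing import List


def solver_rewards(answers: List[str]) -> List[float]:
    """Reward each sampled answer 1.0 if it matches the majority vote, else 0.0.

    With no external labels, the most frequent answer serves as a pseudo-label.
    """
    majority, _ = Counter(answers).most_common(1)[0]
    return [1.0 if a == majority else 0.0 for a in answers]


def proposer_reward(answers: List[str]) -> float:
    """Reward the proposer only for questions of intermediate difficulty.

    If every sample agrees, the question is likely too easy; if no two samples
    agree, it is likely too hard or ill-posed. (Assumed shaping for illustration.)
    """
    majority_count = Counter(answers).most_common(1)[0][1]
    return 1.0 if 1 < majority_count < len(answers) else 0.0


def unit_test_reward(passed: List[bool]) -> float:
    """For coding tasks: fraction of proposer-written unit tests the solver passes."""
    return sum(passed) / len(passed) if passed else 0.0


if __name__ == "__main__":
    # Toy example: five solver samples for a self-generated question "123 * 456".
    samples = ["56088", "56088", "56088", "56074", "56088"]
    print(solver_rewards(samples))   # [1.0, 1.0, 1.0, 0.0, 1.0]
    print(proposer_reward(samples))  # 1.0 -> partial agreement, useful difficulty
    print(unit_test_reward([True, True, False]))  # ~0.67
```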