This paper investigates whether pre-trained language models can improve their reasoning capabilities by generating their own questions and answers, without any external data. To this end, we propose a method that provides only a single prompt specifying a topic (e.g., algebra problems) and lets the model generate questions on its own. We present Self-Questioning Language Models (SQLM), an asymmetric self-learning framework consisting of a proposer, which generates questions, and a solver, which generates answers, both trained with reinforcement learning. The proposer is rewarded for generating problems of appropriate difficulty, while the solver is rewarded via majority voting over its sampled answers, which serves as a proxy for correctness when no ground-truth answer is available. For coding problems, the proposer instead generates unit tests, which are used to validate the solver's solutions. We evaluate this framework on three benchmarks: three-digit multiplication, algebra problems from the OMEGA benchmark, and programming problems from Codeforces, showing that language models can improve without any external training dataset.
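
As a rough illustration of the loop described above (a sketch, not the authors' implementation), the following Python snippet shows one self-play round: a hypothetical `propose` function turns the topic prompt into a question, the solver samples several candidate answers, the majority answer acts as a proxy label, and the proposer's reward is shaped (by assumption here) to favor questions of intermediate difficulty. All function names and the exact reward shaping are illustrative assumptions.

```python
from collections import Counter
from typing import Callable, List


def self_play_round(
    propose: Callable[[str], str],   # hypothetical: topic prompt -> question
    solve: Callable[[str], str],     # hypothetical: question -> answer string
    topic_prompt: str,
    num_samples: int = 8,
) -> dict:
    """One self-questioning round (illustrative sketch, not the paper's code)."""
    question = propose(topic_prompt)

    # Solver samples several candidate answers to the generated question.
    answers: List[str] = [solve(question) for _ in range(num_samples)]

    # Majority vote serves as a proxy for correctness (no ground-truth answers exist).
    majority_answer, majority_count = Counter(answers).most_common(1)[0]
    agreement = majority_count / num_samples

    # Solver reward: 1 for answers that match the majority, 0 otherwise.
    solver_rewards = [1.0 if a == majority_answer else 0.0 for a in answers]

    # Proposer reward (assumed shaping): peaks at intermediate agreement, so the
    # proposer is pushed toward questions of "appropriate difficulty" -- neither
    # trivially easy (full agreement) nor unsolvable (no agreement).
    proposer_reward = 4.0 * agreement * (1.0 - agreement)

    return {
        "question": question,
        "majority_answer": majority_answer,
        "solver_rewards": solver_rewards,
        "proposer_reward": proposer_reward,
    }


if __name__ == "__main__":
    # Toy stand-ins for the proposer and solver, just to exercise the loop.
    import random

    demo = self_play_round(
        propose=lambda topic: f"{topic}: what is 12 * 34?",
        solve=lambda q: random.choice(["408", "408", "408", "398"]),
        topic_prompt="arithmetic",
    )
    print(demo)
```

In practice, both roles would be played by the same (or separately fine-tuned) language model and updated with a reinforcement learning algorithm using these rewards; for coding problems, the majority-vote proxy would be replaced by the proposer-generated unit tests mentioned in the abstract.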