This paper proposes NOVER (NO-VERifier Reinforcement Learning), a general framework for reinforcement learning that requires no external verifier. Conventional incentive training approaches rely on external verifiers, which limits their applicability to domains such as mathematics and coding, where such verifiers are readily available. In contrast, NOVER enables incentive training using only standard supervised fine-tuning data. Applicable to a wide range of text-to-text tasks, NOVER outperforms same-size models distilled from large reasoning models such as DeepSeek R1 671B by 7.7%. Its flexibility also opens new possibilities for optimizing large language models, such as inverse incentive training.
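The abstract does not spell out how a reward can be computed without a verifier, so the sketch below illustrates one plausible mechanism: the policy model itself scores how likely the ground-truth answer from the SFT pair is, conditioned on the reasoning it generated, and that likelihood serves as the reward. The model choice (`gpt2`), the function name `verifier_free_reward`, and the exact reward definition are illustrative assumptions, not NOVER's confirmed formulation.

```python
# Hedged sketch: a verifier-free reward computed only from standard SFT pairs
# (prompt, reference_answer). Instead of calling an external verifier, the
# policy model scores the reference answer conditioned on its own reasoning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small stand-in policy model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

@torch.no_grad()
def verifier_free_reward(prompt: str, reasoning: str, reference_answer: str) -> float:
    """Reward = negative per-token NLL of the reference answer, conditioned on
    the prompt plus the model's generated reasoning (higher is better)."""
    context_ids = tokenizer(prompt + reasoning, return_tensors="pt").input_ids
    answer_ids = tokenizer(reference_answer, return_tensors="pt").input_ids
    input_ids = torch.cat([context_ids, answer_ids], dim=1)

    # Mask out the context so the loss is averaged over answer tokens only.
    labels = input_ids.clone()
    labels[:, : context_ids.shape[1]] = -100

    loss = model(input_ids, labels=labels).loss  # mean NLL on the answer span
    return -loss.item()  # lower perplexity on the reference -> higher reward

# Usage: reasoning that supports the reference answer should score higher
# than irrelevant reasoning for the same (prompt, answer) pair.
r_good = verifier_free_reward("Q: 2 + 2 = ?\n", "Two plus two is four.\n", "4")
r_bad = verifier_free_reward("Q: 2 + 2 = ?\n", "The capital of France is Paris.\n", "4")
print(f"reward(relevant reasoning)   = {r_good:.3f}")
print(f"reward(irrelevant reasoning) = {r_bad:.3f}")
```

Because the reward is derived entirely from the SFT data and the policy's own likelihoods, a score like this could, in principle, drive incentive training on any text-to-text task where reference outputs exist but no programmatic checker does.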