OpenAI-o1 and DeepSeek-R1 demonstrated that test-time scaling can significantly improve model performance on complex tasks such as logical reasoning. This paper proposes Adaptive Rectification Sampling (AR-Sampling), which guides the model to self-correct errors at a finer, step-level granularity. AR-Sampling uses a process-supervised reward model (PRM) as a step-level verifier, together with trigger sentences that adaptively induce the model to rethink at the appropriate steps. Experimental results on GSM8K and MATH500 demonstrate that the proposed approach improves solution accuracy by encouraging step-level rethinking, while generating only a reasonable number of additional tokens.
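
As a rough illustration of the procedure sketched above, the following Python pseudocode outlines one way such a step-level verify-and-rethink loop could look. The `generate_step` and `prm_score` callables, the trigger sentence, the acceptance threshold, and the retry budget are all hypothetical placeholders for illustration, not the paper's actual implementation.

```python
from typing import Callable, List, Tuple

# Minimal sketch of a step-level verify-and-rethink loop (assumptions noted below).
# `generate_step` and `prm_score` are hypothetical callables standing in for the
# policy model and the process-supervised reward model (PRM); the trigger sentence,
# threshold, and retry limit are illustrative assumptions.

TRIGGER_SENTENCE = "Wait, let me re-examine this step."  # illustrative trigger

def ar_sampling(
    question: str,
    generate_step: Callable[[str], Tuple[str, bool]],  # returns (next step, is_final)
    prm_score: Callable[[str, str], float],            # scores a step given its context
    threshold: float = 0.5,                            # assumed acceptance threshold
    max_retries: int = 2,                              # assumed per-step rethink budget
) -> List[str]:
    """Generate a solution step by step; when the verifier flags a step as likely
    wrong, append a trigger sentence and ask the model to rethink that step."""
    context = question
    steps: List[str] = []
    done = False
    while not done:
        step, done = generate_step(context)
        retries = 0
        # Rethink the current step while the verifier scores it below threshold.
        while prm_score(context, step) < threshold and retries < max_retries:
            rethink_context = context + "\n" + step + "\n" + TRIGGER_SENTENCE
            step, done = generate_step(rethink_context)
            retries += 1
        steps.append(step)
        context = context + "\n" + step
    return steps
```

Because rethinking is triggered only at steps the verifier scores poorly, the extra tokens are spent where they are most likely to correct an error rather than on regenerating whole solutions.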