In this paper, we present a novel method for controlling the reasoning process of large language models (LLMs) with thinking capabilities. Through experiments on 500 tasks across 10 categories using the DeepSeek-R1-Distill model, we identify several reasoning behaviors, including expressing uncertainty, generating examples for hypothesis testing, and backtracking during the reasoning process. We show that these behaviors are linearly represented in the model's activation space and can be controlled with steering vectors. This study provides a method to extract these vectors and apply them to modulate specific aspects of the model's reasoning process, such as its tendency to backtrack or its expression of uncertainty. We verify that our control method behaves consistently across three DeepSeek-R1-Distill models.
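
The sketch below illustrates the general steering-vector recipe the abstract describes: extract a direction as the difference in mean activations between reasoning steps that exhibit a behavior (here, backtracking) and steps that do not, then add a scaled copy of that direction to a layer's hidden states during generation. This is a minimal illustration, not the paper's implementation; the checkpoint name, layer index, steering scale, and contrastive example texts are all hypothetical placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint name
LAYER = 12   # hypothetical layer where the behavior is linearly represented
SCALE = 4.0  # hypothetical steering strength

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.float16, device_map="auto"
)

def mean_activation(texts: list[str]) -> torch.Tensor:
    """Average the chosen layer's last-token activation over a set of texts."""
    acts = []
    for t in texts:
        ids = tok(t, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        acts.append(out.hidden_states[LAYER][0, -1])
    return torch.stack(acts).mean(dim=0)

# Contrastive extraction: mean activation on steps showing the target
# behavior minus the mean on steps that do not (illustrative examples).
backtracking = ["Wait, that can't be right. Let me reconsider the earlier step."]
neutral = ["Therefore, substituting the value gives the final answer."]
steering_vec = mean_activation(backtracking) - mean_activation(neutral)

def steer(module, inputs, output):
    """Forward hook: add the steering vector to the layer's hidden states."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + SCALE * steering_vec.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.model.layers[LAYER].register_forward_hook(steer)
prompt = tok("Solve: what is 17 * 23?", return_tensors="pt").to(model.device)
steered = model.generate(**prompt, max_new_tokens=200)
handle.remove()  # restore unsteered behavior
print(tok.decode(steered[0], skip_special_tokens=True))
```

Negating the scale (e.g., `SCALE = -4.0`) would suppress rather than amplify the behavior, which is how a steering vector can modulate, for instance, the model's backtracking tendency in both directions.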