In this paper, we present a novel method for controlling the reasoning process of large language models (LLMs) with thinking capabilities. Through experiments on 500 tasks across 10 diverse categories using a DeepSeek-R1-Distill model, we identify several reasoning behaviors, including expressing uncertainty, generating examples for hypothesis testing, and backtracking within the reasoning process. We show that these behaviors are mediated by linear directions in the model's activation space and can be controlled using steering vectors. This study provides a method for modulating specific aspects of the reasoning process, such as the tendency to backtrack or to express uncertainty, and shows consistent control performance across three DeepSeek-R1-Distill models. The result is a practical tool for steering the reasoning process of thinking models in a controllable and interpretable manner.
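To make the abstract's central mechanism concrete, the sketch below illustrates the general activation-steering recipe that work of this kind builds on: derive a direction as the difference of mean residual-stream activations between contrastive examples of a behavior (e.g., backtracking vs. no backtracking), then add the scaled direction back into the residual stream during generation via a forward hook. This is a minimal sketch of the generic technique, not the paper's exact procedure; the checkpoint name, layer index, steering coefficient, and contrast prompts are all illustrative assumptions.

```python
# Minimal activation-steering sketch for a HuggingFace-style causal LM.
# Layer index, coefficient, and prompts are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint
LAYER = 12   # hypothetical layer at which to intervene
COEFF = 4.0  # hypothetical steering strength

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

def mean_activation(prompts, layer):
    """Average residual-stream activation at `layer`, taken at the last token."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        acts.append(out.hidden_states[layer][0, -1])
    return torch.stack(acts).mean(dim=0)

# Contrastive examples of the target behavior (placeholders, not paper data):
pos = ["Wait, that can't be right. Let me reconsider the earlier step."]
neg = ["The answer follows directly, so we conclude."]

# Difference-of-means direction, normalized to unit length.
steer = mean_activation(pos, LAYER) - mean_activation(neg, LAYER)
steer = steer / steer.norm()

def hook(module, inputs, output):
    # Add the scaled steering direction to the layer's hidden states.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + COEFF * steer.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.model.layers[LAYER].register_forward_hook(hook)
ids = tok("Solve: what is 17 * 24?", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=64)[0]))
handle.remove()  # restore unsteered behavior
```

In this scheme, the sign and magnitude of the coefficient control whether the behavior is amplified or suppressed, which is what makes the intervention both controllable and interpretable.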