This paper focuses on mechanistic interpretability research, which reverse-engineers a model to explain its behavior. Unlike previous studies, which have focused on the static mechanisms underlying specific behaviors, this study explores the learning dynamics inside the model. Inspired by the concept of intrinsic dimensionality, we view the model as a computational graph that contains redundancy for a specific task, and we regard fine-tuning as a process of searching for and optimizing subgraphs within this graph. Based on this hypothesis, we propose circuit fine-tuning, an algorithm that iteratively builds subgraphs for a specific task and heuristically updates their parameters. We validate this hypothesis through carefully designed experiments and provide a detailed analysis of the learning dynamics during fine-tuning. Experiments on more complex tasks demonstrate that circuit fine-tuning can balance performance on the target task with general capabilities. This study presents a novel analytical approach to the dynamics of fine-tuning, offers new insights into the mechanisms of the training process, and may inspire the design of better algorithms for neural network training.
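The abstract describes circuit fine-tuning only at a high level. The following is a minimal, hypothetical sketch of what an "iteratively search for a task subgraph, then update only its parameters" loop could look like in PyTorch; the gradient-magnitude scoring rule, the top_k fraction, the refresh_every schedule, and the select_subgraph and circuit_finetune names are illustrative assumptions, not details taken from the paper.

```python
import torch


def select_subgraph(model, batch, loss_fn, top_k=0.1):
    """Score parameters by gradient magnitude and keep the top fraction as the subgraph."""
    model.zero_grad()
    loss_fn(model(batch["x"]), batch["y"]).backward()
    scores = torch.cat([p.grad.abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(scores, 1.0 - top_k)
    # One boolean mask per parameter tensor marks membership in the subgraph.
    return [p.grad.abs() >= threshold for p in model.parameters()]


def circuit_finetune(model, data_loader, loss_fn, steps=100, lr=1e-3, refresh_every=10):
    """Iteratively re-select a task subgraph and update only its parameters."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    masks = None
    for step, batch in zip(range(steps), data_loader):
        if step % refresh_every == 0:  # periodically re-search the subgraph
            masks = select_subgraph(model, batch, loss_fn)
        optimizer.zero_grad()
        loss_fn(model(batch["x"]), batch["y"]).backward()
        for p, m in zip(model.parameters(), masks):
            p.grad.mul_(m)  # zero out gradients outside the subgraph
        optimizer.step()
    return model
```

In this sketch, parameters outside the selected subgraph receive no updates, so the rest of the computational graph is left untouched, which is one simple way such a method could trade off target-task performance against general capabilities.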