This paper addresses the 'stability gap', a phenomenon in continual learning where performance on a previous task temporarily deteriorates when a new task is learned. The gap arises even under idealized joint-loss training, exposing a vulnerability in algorithms designed to mitigate forgetting of prior learning. We argue that this gap reflects an imbalance between rapid adaptation to new data and robust maintenance of existing knowledge, and we propose a novel mechanism, called uncertainty-modulated gain dynamics, inspired by the multi-timescale dynamics of biological brains. This mechanism approximates a two-timescale optimizer, dynamically balancing the integration of new knowledge against interference with prior knowledge. Experimental results on MNIST and CIFAR benchmarks show that the proposed mechanism effectively mitigates the stability gap. Finally, we analyze how gain modulation mirrors noradrenergic function in cortical circuits, providing insight into how the mechanism reduces the stability gap and improves performance on continual learning tasks.
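The abstract does not spell out the update rule, so the sketch below is purely illustrative: one way an uncertainty-modulated gain could couple a loss-based surprise signal to an interpolation between a fast (plastic) and a slow (stable) learning rate, approximating two-timescale optimization in a single SGD step. All names and constants here (UncertaintyGain, gain_modulated_step, beta, sharpness, lr_fast, lr_slow) are hypothetical and not taken from the paper.

```python
import torch


class UncertaintyGain:
    """Illustrative surprise signal: how far the current loss sits above
    its running average, squashed into a gain in [0, 1)."""

    def __init__(self, beta: float = 0.9, sharpness: float = 5.0):
        self.beta = beta            # smoothing factor for the running loss
        self.sharpness = sharpness  # how steeply surprise saturates the gain
        self.running_loss = None

    def __call__(self, loss: torch.Tensor) -> torch.Tensor:
        loss = loss.detach()
        if self.running_loss is None:
            self.running_loss = loss.clone()
        self.running_loss = self.beta * self.running_loss + (1 - self.beta) * loss
        # Surprise is the positive deviation of the loss from its running
        # average; tanh maps zero surprise to gain 0 (slow regime) and
        # large surprise toward gain 1 (fast regime).
        surprise = torch.relu(loss - self.running_loss) / (self.running_loss + 1e-8)
        return torch.tanh(self.sharpness * surprise)


def gain_modulated_step(params, gain, lr_fast=1e-2, lr_slow=1e-4):
    """Blend fast and slow learning rates by the gain, so one SGD step
    interpolates between a plastic and a stable optimizer."""
    lr = gain * lr_fast + (1.0 - gain) * lr_slow
    with torch.no_grad():
        for p in params:
            if p.grad is not None:
                p.add_(p.grad, alpha=-lr.item())
```

Under this reading, a task switch spikes the loss, raising the gain and temporarily favoring fast adaptation; as the loss settles, the gain decays and updates revert to the slow, consolidation-friendly rate, which is the dynamic balance the abstract describes.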