In this paper, we present RDBP, a novel method that addresses the tendency of existing continual learning models to favor either plasticity or stability. RDBP combines two complementary mechanisms: ReLUDown, which preserves feature sensitivity while preventing neurons from becoming quiescent, and Decreasing Backpropagation, which gradually shields early layers from abrupt updates. Evaluated on the Continual ImageNet benchmark, RDBP matches or surpasses state-of-the-art methods in both plasticity and stability while also reducing computational cost. RDBP therefore offers a practical solution for real-world continual learning and a clear benchmark for evaluating future continual learning strategies.