This paper addresses Continual Learning (CL), the setting in which a machine learning model must continually learn new tasks to adapt to changing data in real-world environments. Specifically, it focuses on the challenge of learning new tasks while retaining previously acquired knowledge, the loss of which is known as catastrophic forgetting. We highlight the shortcomings of the variational objectives commonly used in Bayesian CL and propose a novel learning objective that incorporates the regularization effects of multiple previous posterior estimates, preventing individual errors from dominating future posterior updates. The proposed method draws on Temporal-Difference methods from reinforcement learning and neuroscience, and it outperforms existing variational CL methods on CL benchmarks.
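As a minimal illustrative sketch (assuming a VCL-style starting point; this is not the paper's exact formulation), standard variational CL fits the posterior for task $t$ by maximizing
$$\mathcal{L}_t(q_t) = \mathbb{E}_{q_t(\theta)}\big[\log p(\mathcal{D}_t \mid \theta)\big] - \mathrm{KL}\big(q_t(\theta)\,\|\,q_{t-1}(\theta)\big),$$
so any error in the single previous estimate $q_{t-1}$ propagates directly into every subsequent update. An objective of the kind described above could instead spread the regularization across several earlier posterior estimates, for example
$$\mathcal{L}_t(q_t) = \mathbb{E}_{q_t(\theta)}\big[\log p(\mathcal{D}_t \mid \theta)\big] - \sum_{k=1}^{K} w_k\,\mathrm{KL}\big(q_t(\theta)\,\|\,q_{t-k}(\theta)\big), \qquad \sum_{k=1}^{K} w_k = 1,\ w_k \ge 0,$$
where the number of retained posteriors $K$ and the weights $w_k$ (e.g., geometrically decaying, in the spirit of TD($\lambda$)-style targets) are illustrative choices introduced here only to make the idea concrete.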