This paper presents the notion of Continual Learning's Effective Model Capacity (CLEMC), which characterizes the stability-plasticity dilemma in continual learning (CL) of neural networks. We develop a differential equation that models the evolving interaction between the neural network, the task data, and the optimization procedure, and show that the effective capacity, i.e., the stability-plasticity trade-off, is inherently non-stationary. Through extensive experiments across architectures ranging from feedforward and convolutional networks to graph neural networks and large Transformer-based language models with millions of parameters, we demonstrate that the network's ability to represent new tasks diminishes as the new task distribution drifts away from those of previous tasks.
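As an illustrative sketch only (the symbols $C_k$, $\theta_k$, $\mathcal{D}_k$, $\mathcal{O}_k$, $\Phi$, and $\Psi$ are assumed notation, not necessarily the paper's), the setting can be summarized by writing the effective capacity at task $k$ as a function of the network state, the task data, and the optimizer state, with its change across tasks driven by the shift between successive task distributions:
%
% Illustrative notation only; not the paper's exact formulation.
\begin{align}
  C_k &= \Phi\bigl(\theta_k,\; \mathcal{D}_k,\; \mathcal{O}_k\bigr), \\
  C_{k+1} - C_k &= \Psi\bigl(\theta_k,\; \mathcal{D}_k,\; \mathcal{D}_{k+1},\; \mathcal{O}_k\bigr),
\end{align}
so that, consistent with the non-stationarity result above, $C_{k+1} - C_k$ tends toward negative values whenever $\mathcal{D}_{k+1}$ differs substantially from $\mathcal{D}_k$.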