This paper considers a realistic continual learning (CL) setting in which GPU-time constraints outweigh memory constraints. Unlike prior CL research, we explore a "middle ground" where memory is sufficient to mitigate forgetting but full retraining from scratch is too costly. We find that in this setting, models become biased toward previous tasks and struggle to learn new ones, suggesting that plasticity, rather than stability, is the key challenge. Accordingly, we propose Weight Space Consolidation, which combines rank-based parameter resets to restore plasticity with weight averaging to enhance stability. It outperforms strong baselines on continual fine-tuning of image classifiers and large language models while keeping computational costs low compared to full retraining, offering a scalable alternative.
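
To make the two components concrete, the PyTorch sketch below shows one plausible instantiation under stated assumptions: parameters are ranked by absolute magnitude, the lowest-ranked fraction is reset to a fresh initialization (plasticity recovery), and task-end weights are folded into a uniform running average (stability). The ranking criterion, reset ratio, averaging schedule, and the `train_one_task` helper are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative sketch only: the abstract does not specify the ranking criterion,
# reset fraction, or averaging schedule; the choices below are assumptions.
import copy

import torch
import torch.nn as nn


@torch.no_grad()
def rank_based_reset(model: nn.Module, reset_ratio: float = 0.1) -> None:
    """Reset the lowest-ranked entries of each linear layer to recover plasticity.

    Entries are ranked by absolute magnitude within each layer (an assumption);
    the bottom `reset_ratio` fraction is re-drawn from a fresh initialization.
    """
    for module in model.modules():
        if isinstance(module, nn.Linear):
            weight = module.weight
            fresh = torch.empty_like(weight)
            nn.init.kaiming_uniform_(fresh)  # fresh values used for the reset
            scores = weight.abs().flatten()
            k = int(reset_ratio * scores.numel())
            if k == 0:
                continue
            # Indices of the k lowest-magnitude entries -> overwrite with fresh values.
            idx = torch.topk(scores, k, largest=False).indices
            flat = weight.view(-1)           # view into the parameter (contiguous)
            flat[idx] = fresh.view(-1)[idx]  # in-place reset through the view


@torch.no_grad()
def weight_average(running_avg: nn.Module, model: nn.Module, num_tasks_seen: int) -> None:
    """Fold the current task-end weights into a running average for stability."""
    for p_avg, p_cur in zip(running_avg.parameters(), model.parameters()):
        p_avg.mul_(num_tasks_seen / (num_tasks_seen + 1)).add_(p_cur / (num_tasks_seen + 1))


def continual_training_loop(model, task_loaders, train_one_task):
    """Hypothetical driver: train on each task, average weights, then reset."""
    averaged = copy.deepcopy(model)  # consolidated model kept in weight space
    for t, loader in enumerate(task_loaders):
        train_one_task(model, loader)               # ordinary fine-tuning on task t
        weight_average(averaged, model, t)          # consolidate task-end weights
        rank_based_reset(model, reset_ratio=0.1)    # free up capacity for the next task
    return averaged
```

In this sketch the averaged copy serves as the consolidated model for evaluation, while the working model is periodically reset to stay plastic; the exact interplay between the two copies in the proposed method may differ.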