Inspired by synaptic pruning in the biological brain, we propose a size-based synaptic pruning method that progressively removes low-importance connections during training. The method can be applied to a range of time-series prediction models, including RNNs, LSTMs, and Patch Time Series Transformers, where it replaces dropout and is integrated directly into the training loop. Weight importance is computed from the absolute weight size, and a cubic schedule progressively increases global sparsity. By periodically and permanently removing low-importance weights while maintaining gradient flow for the remaining active weights, the method eliminates the need for separate pruning and fine-tuning stages.
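
To make the mechanism concrete, the following is a minimal PyTorch sketch of this kind of in-training magnitude pruning with a cubic sparsity schedule. It is an illustrative approximation rather than the authors' implementation: the names `MagnitudePruner` and `cubic_sparsity`, the hyperparameter values (final sparsity 0.8, pruning every 100 steps), and the toy LSTM training loop are all assumptions made for the example.

```python
import torch
import torch.nn as nn

def cubic_sparsity(step, total_steps, final_sparsity, initial_sparsity=0.0):
    # Cubic schedule: sparsity rises quickly early in training,
    # then levels off at final_sparsity.
    t = min(step / total_steps, 1.0)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1.0 - t) ** 3

class MagnitudePruner:
    """Tracks a binary mask per weight matrix; weights whose absolute value
    falls below a global threshold are permanently zeroed, and their
    gradients are blocked so only active weights keep learning."""

    def __init__(self, model, final_sparsity=0.8, prune_every=100, total_steps=10_000):
        self.model = model
        self.final_sparsity = final_sparsity
        self.prune_every = prune_every
        self.total_steps = total_steps
        # Prune weight matrices only; biases stay dense.
        self.masks = {name: torch.ones_like(p)
                      for name, p in model.named_parameters() if p.dim() > 1}

    @torch.no_grad()
    def prune(self, step):
        """Periodically raise global sparsity to the scheduled level."""
        if step % self.prune_every != 0:
            return
        sparsity = cubic_sparsity(step, self.total_steps, self.final_sparsity)
        # Pool all prunable weights; already-pruned entries are zero, so the
        # threshold targets global sparsity across the whole model.
        pool = torch.cat([p.abs().flatten()
                          for name, p in self.model.named_parameters()
                          if name in self.masks])
        k = int(sparsity * pool.numel())
        if k == 0:
            return
        threshold = torch.kthvalue(pool, k).values
        for name, p in self.model.named_parameters():
            if name in self.masks:
                # Masks only shrink, so removal is permanent.
                self.masks[name] *= (p.abs() > threshold).float()
                p *= self.masks[name]

    @torch.no_grad()
    def mask_gradients(self):
        """Call after backward(): zero gradients of pruned weights."""
        for name, p in self.model.named_parameters():
            if name in self.masks and p.grad is not None:
                p.grad *= self.masks[name]

    @torch.no_grad()
    def apply_masks(self):
        """Call after optimizer.step(): re-zero pruned weights in case the
        optimizer (momentum, weight decay) nudged them off zero."""
        for name, p in self.model.named_parameters():
            if name in self.masks:
                p *= self.masks[name]

# Toy training loop with an LSTM forecaster (data and shapes are illustrative).
model = nn.LSTM(input_size=8, hidden_size=64, batch_first=True)
pruner = MagnitudePruner(model, final_sparsity=0.8, total_steps=1_000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1, 1_001):
    x = torch.randn(32, 24, 8)       # (batch, sequence length, features)
    y = torch.randn(32, 24, 64)
    out, _ = model(x)
    loss = nn.functional.mse_loss(out, y)
    optimizer.zero_grad()
    loss.backward()
    pruner.mask_gradients()          # gradient flow only for active weights
    optimizer.step()
    pruner.apply_masks()             # keep removal permanent
    pruner.prune(step)               # progressively increase global sparsity
```

The two hooks around `optimizer.step()` reflect the point that pruning is interleaved with training rather than applied as a separate stage: `mask_gradients` restricts updates to surviving connections, while `apply_masks` keeps removed connections at zero even when the optimizer carries momentum or weight decay.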