This paper investigates numerical instability arising during the training of FastSurfer, a deep learning (DL)-based brain imaging analysis pipeline. We analyze the variability of FastSurfer's training process under controlled floating-point perturbations and random-seed variation, demonstrating that DL pipelines are more susceptible to numerical instability than conventional neuroimaging pipelines. However, an ensemble of models generated through these perturbations performs comparably to the unperturbed baseline, showing that this variability can be leveraged in downstream applications such as brain age regression. We conclude that training-time variability is not merely a reproducibility concern but also a resource for enhancing robustness and enabling new applications.