Deep neural networks (DNNs) excel at a variety of tasks, but often at the cost of unnecessarily large model sizes, high computational demands, and significant memory footprints. Optimally deep networks (ODNs) address these challenges by balancing model depth against task complexity. Specifically, they are obtained with a training strategy similar to neural architecture search (NAS), called "progressive depth scaling." This strategy starts from a shallow depth and gradually adds blocks as the previous ones converge, repeating the process until the target accuracy is reached. By using only the optimal depth for a given dataset, ODNs eliminate redundant layers, which lowers future training and inference costs, shrinks the memory footprint, improves computational efficiency, and eases deployment on edge devices. On MNIST and SVHN, the optimal depths of ResNet-18 and ResNet-34 reduce the memory footprint by up to 98.64% and 96.44%, respectively, while maintaining competitive accuracies of 99.31% and 96.08%.
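As a rough illustration of progressive depth scaling as described above, the sketch below grows a small network one block at a time, freezing blocks that have already converged and stopping at the shallowest depth that reaches a target accuracy. This is a minimal PyTorch sketch under assumed names: GrowingNet, train_until_converged, and evaluate_accuracy are hypothetical stand-ins, not the paper's implementation, and the MLP blocks merely stand in for ResNet stages.

```python
import torch.nn as nn

# Hypothetical helpers: train_until_converged(model) and evaluate_accuracy(model)
# stand in for the paper's (unspecified) training and evaluation loops.

class GrowingNet(nn.Module):
    """A toy network whose hidden depth can be extended one block at a time."""

    def __init__(self, in_dim=784, width=64, out_dim=10):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, width), nn.ReLU())]
        )
        self.head = nn.Linear(width, out_dim)
        self.width = width

    def add_block(self):
        # Freeze the blocks that have already converged, then append a new one.
        for p in self.blocks.parameters():
            p.requires_grad = False
        self.blocks.append(
            nn.Sequential(nn.Linear(self.width, self.width), nn.ReLU())
        )

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return self.head(x)


def progressive_depth_scaling(model, train_until_converged, evaluate_accuracy,
                              target_acc, max_depth=34):
    """Grow depth until target_acc is reached; the resulting depth is then
    treated as the optimal depth for the dataset (deeper layers are redundant)."""
    while True:
        train_until_converged(model)            # train the current shallow configuration
        if evaluate_accuracy(model) >= target_acc:
            return model, len(model.blocks)     # stop at the shallowest sufficient depth
        if len(model.blocks) >= max_depth:
            return model, max_depth             # depth budget exhausted
        model.add_block()                       # otherwise deepen and continue
```

In this sketch the stopping condition, rather than a fixed architecture, determines the final depth, which is what lets the resulting model avoid the redundant layers of a full-depth network.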