This paper presents a comprehensive evaluation of lightweight deep learning models for deployment in resource-constrained environments such as low-memory devices. We benchmark five state-of-the-art architectures (MobileNetV3 Small, ResNet18, SqueezeNet, EfficientNetV2-S, and ShuffleNetV2) on three diverse datasets: CIFAR-10, CIFAR-100, and Tiny ImageNet. Each model is evaluated on four key performance metrics: classification accuracy, inference time, floating-point operations (FLOPs), and model size. To assess the impact of hyperparameter tuning, data augmentation, and the training paradigm, we compare models fine-tuned from pretrained weights against models trained from scratch. We find that transfer learning significantly improves model accuracy and computational efficiency, especially on complex datasets such as Tiny ImageNet. EfficientNetV2-S consistently achieves the highest accuracy, MobileNetV3 Small offers the best balance between accuracy and efficiency, and SqueezeNet excels in inference speed and compactness. These results highlight the critical tradeoff between accuracy and efficiency and provide actionable guidance for deploying lightweight models in real-world applications with limited computational resources.
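Two of the efficiency metrics above, model size and inference time, can be measured with a short PyTorch sketch. This is illustrative only: the tiny stand-in CNN, the CIFAR-10-sized input, and the float32 size estimate are assumptions for the sketch, not the paper's actual benchmark harness or architectures.

```python
import time
import torch
import torch.nn as nn

# Small stand-in CNN; in the paper's setting this would be one of the
# benchmarked architectures (e.g., MobileNetV3 Small) -- assumed here.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),  # 10 output classes, as in CIFAR-10
)
model.eval()

# Model size: parameter count, and megabytes assuming float32 (4 bytes each).
n_params = sum(p.numel() for p in model.parameters())
size_mb = n_params * 4 / 1e6

# Inference time: average wall-clock latency over repeated forward passes
# on a single CIFAR-10-sized input (1 x 3 x 32 x 32).
x = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    model(x)  # warm-up pass
    t0 = time.perf_counter()
    for _ in range(20):
        model(x)
    latency_ms = (time.perf_counter() - t0) / 20 * 1e3

print(f"params={n_params}, size={size_mb:.3f} MB, latency={latency_ms:.2f} ms")
```

Accuracy would additionally require a labeled test set, and FLOPs are typically obtained with a profiling tool rather than by hand, so both are omitted from this sketch.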