Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Comparative Analysis of Lightweight Deep Learning Models for Memory-Constrained Devices

Created by
  • Haebom

Author

Tasnim Shahriar

Outline

This paper presents a comprehensive evaluation of lightweight deep learning models suitable for deployment in resource-constrained environments (e.g., low-memory devices). We benchmark five state-of-the-art architectures—MobileNetV3 Small, ResNet18, SqueezeNet, EfficientNetV2-S, and ShuffleNetV2—on three diverse datasets: CIFAR-10, CIFAR-100, and Tiny ImageNet. We evaluate the models using four key performance metrics: classification accuracy, inference time, floating-point operations (FLOPs), and model size. We compare pretrained models with models trained from scratch to investigate the impact of hyperparameter tuning, data augmentation, and training paradigms. We find that transfer learning significantly improves model accuracy and computational efficiency, especially on complex datasets like Tiny ImageNet. EfficientNetV2 consistently achieves the highest accuracy, MobileNetV3 offers the best balance between accuracy and efficiency, and SqueezeNet excels in inference speed and compactness. This study highlights the critical tradeoff between accuracy and efficiency, providing actionable insights for deploying lightweight models in real-world applications with limited computational resources.
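As a rough illustration of how such a benchmark can be assembled, the sketch below loads the five architectures from torchvision with pretrained ImageNet weights and measures parameter count, fp32 model size, and per-image inference latency. This is a minimal sketch, not the authors' harness: the input resolution, repetition count, and the omission of FLOP counting (which requires a profiler such as fvcore or ptflops) are assumptions for illustration only.

import time
import torch
import torchvision.models as models

# The five benchmarked architectures, as available in torchvision.
candidates = {
    "MobileNetV3-Small": models.mobilenet_v3_small,
    "ResNet18": models.resnet18,
    "SqueezeNet": models.squeezenet1_1,
    "EfficientNetV2-S": models.efficientnet_v2_s,
    "ShuffleNetV2": models.shufflenet_v2_x1_0,
}

dummy = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image (assumed resolution)

for name, ctor in candidates.items():
    model = ctor(weights="DEFAULT").eval()  # pretrained ImageNet weights
    n_params = sum(p.numel() for p in model.parameters())
    size_mb = n_params * 4 / 2**20  # fp32 weights, 4 bytes per parameter

    with torch.no_grad():
        model(dummy)  # warm-up pass before timing
        start = time.perf_counter()
        for _ in range(20):
            model(dummy)
        latency_ms = (time.perf_counter() - start) / 20 * 1e3

    print(f"{name}: {n_params / 1e6:.1f}M params, "
          f"{size_mb:.1f} MB, {latency_ms:.1f} ms/image")

Training from scratch would use weights=None instead of weights="DEFAULT"; the paper's accuracy comparison then follows from training both variants on each dataset.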

Takeaways, Limitations

Takeaways:
  • Transfer learning improves both the accuracy and the computational efficiency of lightweight models, especially on complex datasets such as Tiny ImageNet (a fine-tuning sketch follows this list).
  • EfficientNetV2 achieves the highest accuracy, MobileNetV3 offers the best balance between accuracy and efficiency, and SqueezeNet delivers the fastest inference and the smallest footprint.
  • The study offers actionable guidance for deploying lightweight models in resource-constrained environments.
  • It contributes to optimizing deep learning systems for edge computing and mobile platforms.
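To make the pretrained-versus-scratch comparison concrete, here is a hypothetical fine-tuning setup for one of the benchmarked models. The target class count (10, for CIFAR-10) and the optimizer settings are illustrative assumptions, not values reported in the paper.

import torch
import torch.nn as nn
import torchvision.models as models

# Transfer learning: initialize from ImageNet weights ...
model = models.mobilenet_v3_small(weights="DEFAULT")
# ... and swap the final classifier layer to match the target dataset.
in_features = model.classifier[-1].in_features
model.classifier[-1] = nn.Linear(in_features, 10)  # 10 = CIFAR-10 classes

# Training from scratch would instead pass weights=None, leaving every
# parameter randomly initialized; the rest of the loop is unchanged.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)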
Limitations:
  • The number and variety of evaluation datasets may be limited.
  • More diverse hyperparameter combinations and training strategies may need to be explored.
  • Performance evaluation in real application environments may be lacking.