We study how pruning, applied to reduce network size while maintaining performance, affects model interpretability. Specifically, we investigate how fine-tuning after size-based pruning changes low-level importance maps and high-level concept representations. Using a ResNet-18 trained on the ImageNette dataset, we compare importance maps across pruning levels with Vanilla Gradients (VG) and Integrated Gradients (IG), assessing their sparsity and fidelity, and we track the semantic consistency of learned concepts using CRAFT-based concept extraction. Light pruning improves the focus and fidelity of importance maps and preserves semantically meaningful concepts, whereas heavy pruning reduces importance-map sparsity and concept coherence even though accuracy is maintained.
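As a rough illustration of the measurement pipeline summarized above, the sketch below prunes a ResNet-18 by weight magnitude and computes VG and IG importance maps. It is a minimal sketch, not the exact experimental setup: the use of torch.nn.utils.prune and Captum, the 30% pruning ratio, and the top-mass sparsity proxy are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): global magnitude pruning of a ResNet-18,
# then Vanilla Gradients and Integrated Gradients importance maps via Captum.
import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet18
from captum.attr import Saliency, IntegratedGradients

# ResNet-18 with 10 output classes (ImageNette has 10 classes); weights assumed trained.
model = resnet18(num_classes=10)
model.eval()

# Globally prune 30% of conv/linear weights by L1 magnitude (ratio is illustrative).
to_prune = [(m, "weight") for m in model.modules()
            if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.3)
for module, name in to_prune:
    prune.remove(module, name)  # make pruning permanent (fine-tuning would follow here)

# Importance maps for a single input image (one 3x224x224 tensor).
x = torch.randn(1, 3, 224, 224, requires_grad=True)
target = 0  # class index to explain

vg_map = Saliency(model).attribute(x, target=target)                         # Vanilla Gradients
ig_map = IntegratedGradients(model).attribute(x, target=target, n_steps=50)  # Integrated Gradients

# Simple sparsity proxy: fraction of attribution mass in the top 10% of pixels.
def top_mass(attr, frac=0.1):
    a = attr.abs().flatten()
    k = max(1, int(frac * a.numel()))
    return (a.topk(k).values.sum() / a.sum()).item()

print(f"VG top-10% mass: {top_mass(vg_map):.3f}, IG top-10% mass: {top_mass(ig_map):.3f}")
```

A higher top-mass value indicates a more concentrated (sparser) importance map; comparing this value before and after pruning at different ratios mirrors the sparsity comparison described in the abstract, while fidelity and CRAFT-based concept extraction would require additional tooling.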