While the impact of neural network pruning on model performance is well known, its impact on model interpretability remains unclear. In this study, we investigate how magnitude-based pruning followed by fine-tuning alters low-level importance maps and high-level concept representations. Using ResNet-18 trained on ImageNette, we compare post-hoc attributions from Vanilla Gradients (VG) and Integrated Gradients (IG) across pruning levels, evaluating their sparsity and fidelity. Furthermore, we apply CRAFT-based concept extraction to track changes in the semantic consistency of learned concepts.
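The sketch below is not the authors' implementation; it is a minimal illustration of the pipeline the abstract describes, assuming PyTorch's pruning utilities and Captum for the VG/IG attributions. Dataset loading, fine-tuning on ImageNette, the sparsity/fidelity metrics, and the CRAFT step are omitted, and names such as `PRUNE_AMOUNT` are illustrative assumptions.

```python
import torch
import torch.nn.utils.prune as prune
from torchvision.models import resnet18
from captum.attr import Saliency, IntegratedGradients

PRUNE_AMOUNT = 0.5  # assumed pruning level; the study compares several levels

# ResNet-18 with 10 output classes (ImageNette has 10 classes).
model = resnet18(weights=None, num_classes=10)
model.eval()

# Global magnitude (L1) pruning over all convolutional weights.
conv_params = [(m, "weight") for m in model.modules()
               if isinstance(m, torch.nn.Conv2d)]
prune.global_unstructured(conv_params,
                          pruning_method=prune.L1Unstructured,
                          amount=PRUNE_AMOUNT)
for module, name in conv_params:
    prune.remove(module, name)  # make the pruning permanent

# (Fine-tuning on ImageNette would happen here before computing attributions.)

# Stand-in for a preprocessed test image.
x = torch.randn(1, 3, 224, 224, requires_grad=True)
target = model(x).argmax(dim=1)

# Post-hoc attribution maps compared in the study.
vg_map = Saliency(model).attribute(x, target=target)           # Vanilla Gradients
ig_map = IntegratedGradients(model).attribute(x, target=target,
                                              n_steps=50)       # Integrated Gradients

# Downstream, these maps would be scored for sparsity and fidelity at each
# pruning level, and CRAFT-based concept extraction applied separately.
```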