In this paper, we present an improved structured pruning technique that addresses the high model complexity and computational demands of deep neural networks (DNNs). We observe that existing importance metrics struggle to preserve application-specific performance characteristics, and we propose an importance-metric framework that explicitly incorporates application-specific performance constraints. We employ multiple strategies to determine the optimal pruning size for each group, balancing the trade-off between compression and task performance, and we evaluate the proposed method on an autoencoder for MNIST image reconstruction. Experimental results demonstrate that the proposed method preserves task-related performance and keeps the model usable by satisfying the required application-specific criteria even after substantial pruning.
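As an illustrative sketch only, not the method proposed above: structured pruning ranks whole groups of parameters (e.g., the weight rows of individual neurons) by an importance metric and removes the lowest-ranked groups, so the pruned layer stays dense and hardware-friendly. The snippet below assumes a plain L2-norm importance metric and a fixed keep ratio; the framework described in the abstract instead incorporates application-specific performance constraints and determines the pruning size per group.

```python
import numpy as np

def group_importance(weight):
    """Importance score per structured group: here, the L2 norm of
    each output neuron's weight row (one group = one neuron).
    The actual metric in the paper differs (assumption)."""
    return np.linalg.norm(weight, axis=1)

def prune_groups(weight, bias, keep_ratio):
    """Keep the top `keep_ratio` fraction of neurons by importance,
    removing entire rows at once (structured pruning)."""
    scores = group_importance(weight)
    k = max(1, int(round(keep_ratio * weight.shape[0])))
    keep = np.sort(np.argsort(scores)[-k:])  # indices of surviving neurons
    return weight[keep], bias[keep], keep

# Toy encoder layer: 8 hidden neurons, 4 inputs.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
b = rng.standard_normal(8)

W_p, b_p, kept = prune_groups(W, b, keep_ratio=0.5)
print(W_p.shape, kept)  # half the neurons survive as whole rows
```

Because whole rows are removed, the downstream layer's corresponding input columns must be dropped as well, which is what makes the compression structured rather than element-wise sparsity.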