In this paper, we propose a component-aware pruning strategy for multicomponent neural architectures (MCNAs) to address the problem of deploying deep neural networks (DNNs) in resource-constrained environments. Existing comprehensive structural pruning frameworks reduce model size through parameter dependency analysis, but when applied to MCNAs they can jeopardize network integrity by removing large groups of coupled parameters. The proposed method expands the dependency graph to isolate individual components and the flows between them, generating smaller, more targeted pruning groups that preserve functional integrity. Experimental results on control tasks demonstrate that the proposed method achieves higher sparsity with less performance degradation, offering a new path toward the efficient optimization of complex multicomponent DNNs.
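The core idea can be illustrated with a toy sketch (not the paper's implementation): standard dependency analysis merges all coupled layers into one large pruning group, whereas separating components keeps groups small. The layer names, component labels, and dependency edges below are hypothetical.

```python
from collections import defaultdict

# Hypothetical layers tagged with the component they belong to.
component_of = {
    "enc1": "encoder", "enc2": "encoder",
    "act1": "actor", "crit1": "critic",
}

# Illustrative parameter dependencies: pruning a channel in the first
# layer forces a matching prune in the second.
deps = [("enc1", "enc2"), ("enc2", "act1"), ("enc2", "crit1")]

def groups(edges):
    """Connected components of the dependency graph (union-find)."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    comps = defaultdict(set)
    for node in parent:
        comps[find(node)].add(node)
    return sorted(sorted(c) for c in comps.values())

# Monolithic dependency analysis: one large coupled group spanning
# every component, so pruning it risks the whole network's integrity.
print(groups(deps))  # [['act1', 'crit1', 'enc1', 'enc2']]

# Component-aware analysis: keep only intra-component edges, so each
# component is pruned within its own, smaller group.
intra = [(a, b) for a, b in deps if component_of[a] == component_of[b]]
print(groups(intra))  # [['enc1', 'enc2']]
```

In this sketch, cross-component edges are simply dropped; the paper's method instead models inter-component flows explicitly, but the effect shown here (smaller, component-local pruning groups) is the same.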