Distributed deep neural networks (DNNs) have been shown to reduce the computational burden on mobile devices and lower end-to-end inference latency in edge computing environments. In this paper, we analyze the resilience of distributed DNNs to adversarial attacks. We address this issue from an information-theoretic perspective and rigorously demonstrate that (i) compressing the latent dimension improves resilience at the cost of task-oriented performance, and (ii) placing the segmentation point deeper in the network improves resilience but increases the computational burden. These tradeoffs provide a new perspective on the design of robust distributed DNNs. Through extensive experiments on the ImageNet-1K dataset, we evaluate six DNN architectures, six distributed DNN approaches, and ten adversarial attacks.