In this paper, we present an explainable AI (XAI) method that quantifies how each neuron contributes to the output of neural networks with billions of parameters, such as large language models (LLMs) and generative adversarial networks (GANs). Existing XAI methods assign importance to inputs, but cannot quantify the contribution of individual neurons across thousands of output pixels, tokens, or logits. We address this problem with multi-perturbation Shapley value analysis (MSA), a model-agnostic, game-theoretic framework. MSA systematically removes combinations of neurons to produce Shapley modes: per-unit contribution maps with the same dimensionality as the model output. We apply MSA to models of various sizes, from multilayer perceptrons to GANs and Mixtral-8x7B with 56 billion parameters, showing that regularization concentrates computation on a small number of hub units, that LLMs contain language-specific experts, and that GANs exhibit an inverted pixel-generating hierarchy. These results demonstrate that MSA is a powerful approach for interpreting, compiling, and compressing deep neural networks.
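
The sketch below illustrates the general idea of estimating per-unit Shapley contributions to a vector-valued output by sampling orderings of units and ablating them in combination; it is a minimal illustration under our own assumptions, not the paper's implementation, and the `forward` interface, function names, and sampling scheme are hypothetical.

```python
# Minimal sketch of permutation-sampled Shapley estimation over neuron ablations.
# Assumes a `forward(mask)` callable that runs the model with masked-out units
# disabled (e.g. zeroed) and returns the flattened output (pixels, logits, ...).
# All names and defaults here are illustrative, not taken from the paper.
import numpy as np

def shapley_modes(forward, n_units, n_permutations=100, seed=None):
    """Estimate per-unit contribution maps ("Shapley modes").

    Returns an array of shape (n_units, output_dim): one map per unit,
    with the same dimensionality as the model output.
    """
    rng = np.random.default_rng(seed)
    output_dim = forward(np.ones(n_units, dtype=bool)).shape[0]
    modes = np.zeros((n_units, output_dim))

    for _ in range(n_permutations):
        order = rng.permutation(n_units)
        mask = np.zeros(n_units, dtype=bool)   # start with every unit ablated
        prev = forward(mask)
        for unit in order:                     # restore units one at a time
            mask[unit] = True
            cur = forward(mask)
            modes[unit] += cur - prev          # marginal contribution of this unit
            prev = cur

    return modes / n_permutations              # average over sampled orderings
```

Averaging marginal contributions over sampled orderings is the standard Monte Carlo approximation of Shapley values; exact computation would require all 2^n coalitions, which is infeasible for networks with many units.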