This paper argues that existing local post-hoc explanation algorithms provide insight into the behavior of complex machine learning models, but that their theoretical guarantees hold only for simple decision functions and remain uncertain for complex models. We present a general, learning-theory-based framework for understanding what it means for an explanation to be informative about a decision function: we define an explanation to be informative if it reduces the complexity of the space of plausible decision functions. Using this framework, we show that many popular explanation algorithms are uninformative when applied to complex decision functions, refuting, from a rigorous mathematical standpoint, the notion that they can explain arbitrary models. We derive the conditions under which various explanation algorithms are informative and find that these conditions are considerably stronger than one might expect; for example, gradient-based and counterfactual explanations are uninformative over spaces of differentiable functions, and SHAP and Anchors explanations are uninformative over spaces of decision trees. Based on these results, we discuss how explanation algorithms can be modified to become informative. Although the analysis of explanation algorithms presented here is mathematical, we argue that it has strong implications for practice, particularly for AI auditing, regulation, and high-risk applications.
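To make the informativeness criterion concrete, the following is a minimal formalization sketch; the symbols used here (the hypothesis class \mathcal{F}, the complexity measure C, and the explanation map E) are illustrative assumptions rather than notation taken from the paper.

% Sketch under assumed notation: let \mathcal{F} be the space of plausible
% decision functions, C(\cdot) a complexity measure on function classes
% (e.g., VC dimension or Rademacher complexity), and E an explanation
% algorithm mapping a decision function g and a query point x to an
% explanation E(g, x). Observing the explanation e = E(f, x) produced for
% the model f restricts the plausible space to
\[
  \mathcal{F}_{e,x} \;=\; \{\, g \in \mathcal{F} \;:\; E(g, x) = e \,\},
\]
% and the explanation is called informative precisely when this restriction
% strictly lowers the complexity:
\[
  C(\mathcal{F}_{e,x}) \;<\; C(\mathcal{F}).
\]
% On this reading, an explanation algorithm is uninformative over \mathcal{F}
% when the inequality fails for every f \in \mathcal{F} and every query point x.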