Explainable systems provide interacting agents with information about why certain observed effects occur. While this is a positive information flow, we argue that it must be balanced against negative information flows, such as privacy violations. Since both explainability and privacy concern reasoning about knowledge, we address this tension using an epistemic temporal logic extended with quantification over counterfactual causes. This combination allows us to specify that a multi-agent system provides agents with sufficient information to acquire knowledge of why certain effects occurred. We use this principle to specify explainability as a system-level requirement and present an algorithm for verifying finite-state models against such specifications. We also present a prototype implementation of the algorithm and evaluate it on several benchmarks, demonstrating how the proposed approach can be used to distinguish explainable from inexplicable systems and to establish additional privacy requirements.