This paper points out that existing explainable machine learning (XAI) methods focus on explaining how models map inputs to outputs, while giving little consideration to how explanations are actually used. Explanations should be designed and evaluated with specific purposes in mind, and we present a way to formalize these purposes through a framework grounded in statistical decision theory. We show how this decision-theoretic approach applies to a variety of use cases, such as clinical decision support, providing recourse, or debugging, and we use it to characterize the maximum performance gain that an idealized decision maker could obtain on a specified task, thereby avoiding the misuse that arises when an explanation's purpose is left ambiguous. Researchers should specify concrete use cases and analyze them in light of the expected usage model of the explanation. Finally, we present an evaluation approach that integrates theoretical and empirical perspectives on the value of explanations, together with a definition that encompasses both.
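As a minimal sketch of the decision-theoretic framing (the notation here is illustrative, not the paper's), the value of an explanation can be expressed as the gain in expected utility for a decision maker who acts with the explanation versus without it:

\[
V(E) \;=\; \max_{\delta}\; \mathbb{E}\!\left[\, u\big(\delta(X, E),\, Y\big) \,\right]
\;-\; \max_{\delta'}\; \mathbb{E}\!\left[\, u\big(\delta'(X),\, Y\big) \,\right],
\]

where $X$ denotes the decision maker's inputs, $E$ the explanation, $Y$ the true outcome, $u$ a task-specific utility, and $\delta$ a decision policy. Under this (assumed) formulation, $V(E)$ upper-bounds the performance gain any idealized decision maker could obtain from the explanation on the specified task.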