This paper analyzes the limitations of concept-based explainability methods, which explain machine learning models through a human-understandable intermediary under the assumption that concept predictions reveal the model's internal reasoning. We evaluate the validity of this assumption by analyzing whether concept predictors rely on "relevant" features for their predictions, a property we call locality. Concept-based models that violate locality offer poor explainability, because their concept predictions are based on features that are not apparent to the user, rendering the interpretation meaningless. We present three metrics for evaluating locality and complement the empirical analysis with theoretical results. Each metric captures a different kind of perturbation and evaluates the impact of perturbing "irrelevant" features on the concept predictor. We find that many concept-based models used in practice do not adhere to locality because their concept predictors cannot always distinguish clearly distinct concepts, and we propose remedies to alleviate this problem.
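To make the perturbation-based evaluation concrete, the sketch below perturbs only the features deemed irrelevant to a concept and measures how much that concept's prediction moves; a locality-respecting predictor should be unaffected. The function names, the relevance mask, the Gaussian noise model, and the toy predictor are illustrative assumptions, not the specific metrics defined in the paper.

```python
# Minimal sketch of a perturbation-based locality check (assumed setup, not the
# paper's exact metrics): perturb features marked irrelevant to a concept and
# measure the average change in that concept's prediction.
import numpy as np


def locality_violation(concept_predictor, x, relevant_mask, concept_idx,
                       n_samples=100, noise_scale=1.0, rng=None):
    """Average absolute change in one concept's prediction when only the
    features outside `relevant_mask` are perturbed. Values near zero
    indicate the predictor respects locality for that concept."""
    rng = np.random.default_rng(rng)
    baseline = concept_predictor(x)[concept_idx]

    deltas = []
    for _ in range(n_samples):
        noise = rng.normal(scale=noise_scale, size=x.shape)
        # Keep relevant features fixed; add noise only to irrelevant ones.
        x_perturbed = np.where(relevant_mask, x, x + noise)
        deltas.append(abs(concept_predictor(x_perturbed)[concept_idx] - baseline))
    return float(np.mean(deltas))


if __name__ == "__main__":
    # Toy predictor: concept 0 depends only on the first two features,
    # concept 1 only on the last two.
    def concept_predictor(x):
        return np.array([1 / (1 + np.exp(-x[:2].sum())),
                         1 / (1 + np.exp(-x[2:].sum()))])

    x = np.array([0.5, -1.0, 2.0, 0.3])
    mask_concept0 = np.array([True, True, False, False])
    # Close to 0.0 here, since concept 0 ignores the perturbed features.
    print(locality_violation(concept_predictor, x, mask_concept0, concept_idx=0))
```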