Concept Bottleneck Models (CBMs) aim to enhance interpretability by structuring predictions around concepts that humans can understand. However, unintended information leakage, where prediction signals bypass the concept bottleneck, undermines transparency. In this paper, we present an information-theoretic measure that quantifies information leakage in CBMs, capturing the extent to which concept embeddings encode unintended information beyond the given concepts. We validate the measure through controlled synthetic experiments and demonstrate its effectiveness in detecting leakage trends across a variety of configurations. We highlight that feature and concept dimensionality significantly influence leakage, and that the choice of classifier affects measurement stability, with XGBoost emerging as the most stable estimator. Furthermore, our initial investigations show that the measure exhibits the expected behavior when applied to soft-joint CBMs, suggesting that leakage quantification remains reliable beyond fully synthetic environments. While this study rigorously evaluates the measure in controlled synthetic experiments, future work could extend its application to real-world datasets.
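
The following is a minimal illustrative sketch, not the paper's implementation: it simulates concept embeddings that leak label-relevant information past the concepts, and uses the held-out log-loss gap between two XGBoost classifiers as a crude classifier-based proxy for the conditional information the embeddings carry about the label beyond the ground-truth concepts. All variable names, the data-generating process, and the hyperparameters are assumptions made for illustration.

```python
# Sketch: classifier-based proxy for information leakage in a CBM-like setting.
# Assumption-laden example; not the measure defined in the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n, d_feat, d_concept, d_embed = 4000, 16, 4, 8

# Synthetic features, binary concepts derived from them, and a label that
# depends on the concepts plus a residual feature signal (the "leakage" channel).
X = rng.normal(size=(n, d_feat))
C = (X[:, :d_concept] > 0).astype(int)
leak_signal = X[:, d_concept]                      # information not captured by C
y = ((C.sum(axis=1) + 1.5 * leak_signal) > 2).astype(int)

# Simulated concept embeddings: they encode the concepts but also (unintentionally)
# part of the leakage signal, mimicking a soft concept bottleneck.
E = np.hstack([
    C + 0.1 * rng.normal(size=C.shape),
    0.8 * leak_signal[:, None] + 0.1 * rng.normal(size=(n, 1)),
    rng.normal(size=(n, d_embed - d_concept - 1)),
])

def heldout_log_loss(features, labels):
    """Held-out log-loss (in nats) of an XGBoost classifier, used as an entropy proxy."""
    Xtr, Xte, ytr, yte = train_test_split(features, labels, test_size=0.3, random_state=0)
    clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                        eval_metric="logloss")
    clf.fit(Xtr, ytr)
    return log_loss(yte, clf.predict_proba(Xte))

# Leakage proxy: how much the embeddings reduce label uncertainty once the
# ground-truth concepts are already available.
loss_concepts_only = heldout_log_loss(C, y)
loss_with_embeddings = heldout_log_loss(np.hstack([C, E]), y)
leakage_nats = max(0.0, loss_concepts_only - loss_with_embeddings)
print(f"estimated leakage proxy: {leakage_nats:.3f} nats")
```

In this toy setup, a larger log-loss gap indicates that the embeddings carry more label-relevant information that bypasses the concepts; with the leakage channel removed from E, the gap should shrink toward zero.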