Concept Bottleneck Models (CBMs) aim to increase the reliability of AI systems by restricting their decisions to a set of human-understandable concepts. However, CBMs typically assume that datasets contain accurate concept labels, an assumption that is often violated in practice and that can degrade task performance by up to 25% in some cases. In this paper, we propose a novel loss function, the Concept Preference Optimization (CPO) objective, that effectively mitigates the negative impact of concept mislabeling. We analyze the key properties of the CPO objective and show that it directly optimizes the posterior distribution of concepts, making it inherently less sensitive to concept noise than Binary Cross Entropy (BCE). Experimentally, CPO consistently outperforms BCE on three real-world datasets, both with and without additional label noise. The code is available on GitHub.
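To make the contrast with BCE concrete, below is a minimal sketch of the two styles of concept supervision, assuming a DPO-like preference formulation with a frozen reference model; this is an illustrative assumption, not the paper's exact CPO objective, and the function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def bce_concept_loss(concept_logits, concept_labels):
    # Standard BCE concept supervision: each (possibly noisy) binary
    # concept label contributes with full weight, so a confidently
    # mislabeled concept pushes the predictor toward the wrong value
    # with an unbounded gradient.
    return F.binary_cross_entropy_with_logits(concept_logits, concept_labels)

def dpo_style_concept_loss(concept_logits, ref_logits, concept_labels, beta=1.0):
    # A generic DPO-style preference loss over binary concepts (an
    # illustrative assumption, not the paper's CPO formulation): the
    # annotated value is treated as the preferred outcome and its
    # complement as the dispreferred one, with a frozen reference model
    # anchoring the logit margin. The log-sigmoid saturates, bounding
    # the gradient contributed by mislabeled concepts.
    signs = 2.0 * concept_labels - 1.0              # map {0, 1} -> {-1, +1}
    margin = signs * (concept_logits - ref_logits)  # margin vs. reference
    return -F.logsigmoid(beta * margin).mean()

# Toy usage: 4 samples, 3 binary concepts.
logits = torch.randn(4, 3, requires_grad=True)
ref_logits = torch.randn(4, 3)                      # frozen reference model
labels = torch.randint(0, 2, (4, 3)).float()
print(bce_concept_loss(logits, labels).item())
print(dpo_style_concept_loss(logits, ref_logits, labels).item())
```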