Feature-based methods are commonly used to explain model predictions, but they implicitly assume the availability of interpretable features. For high-dimensional data, however, it is often difficult even for domain experts to mathematically specify which features are important. To address this problem, we present FIX (Features Interpretable to eXperts), a benchmark for measuring how well a collection of features aligns with expert knowledge. In collaboration with experts from diverse fields such as cosmology, psychology, and medicine, we propose FIXScore, a unified expert-alignment measure applicable to a variety of real-world settings spanning vision, language, and time-series modalities. Using FIXScore, we find that popular feature-based explanation methods are poorly aligned with expert-specified knowledge, highlighting the need for new methods that better identify features interpretable to experts.