As advances in image generation technology make counterfeit image detection increasingly important, we address the mismatch between the counterfeit and semantic concept spaces. The semantic concepts learned by pre-trained models are crucial for identifying fake images, yet this mismatch between the two spaces hinders detection performance. We therefore propose a Semantic Discrepancy-aware Detector (SDD), which uses reconstruction learning to align the two spaces at a fine-grained visual level. Leveraging conceptual knowledge embedded in a pre-trained vision-language model, SDD includes a semantic token sampling module that mitigates spatial shifts caused by features unrelated to either counterfeit traces or semantic concepts. A concept-level counterfeit mismatch learning module, built on a visual reconstruction paradigm, then strengthens the interaction between visual semantic concepts and counterfeit traces, capturing mismatches under the guidance of the concepts. Finally, by incorporating the learned concept-level counterfeit mismatches through low-level counterfeit feature enhancement, we minimize redundant counterfeit information. Experimental results on two standard image counterfeit datasets demonstrate that SDD outperforms existing methods.
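As a rough illustration of the semantic token sampling idea, one could score each patch token by its similarity to concept embeddings from a vision-language model and retain only the most concept-aligned tokens, discarding features unrelated to both counterfeit traces and semantic concepts. This is a minimal sketch under assumed names and shapes, not the paper's actual implementation:

```python
# Hypothetical sketch of semantic token sampling: keep the patch tokens
# most aligned with concept embeddings from a vision-language model.
# All names, shapes, and values are illustrative, not the paper's method.

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def sample_semantic_tokens(patch_tokens, concept_embeddings, k):
    """Score each patch token by its best similarity to any concept
    embedding, then return the indices of the top-k tokens in order."""
    scores = [
        max(cosine(tok, c) for c in concept_embeddings)
        for tok in patch_tokens
    ]
    ranked = sorted(range(len(patch_tokens)),
                    key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])

# Toy example: four 2-D "tokens", one concept direction.
tokens = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]]
concepts = [[1.0, 0.0]]
print(sample_semantic_tokens(tokens, concepts, 2))  # → [0, 2]
```

In practice the retained tokens would feed the downstream reconstruction-based mismatch learning; here the selection step alone is shown for clarity.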