This paper presents CURE, a novel lightweight framework for addressing concept-based spurious correlations that compromise the robustness and fairness of pre-trained language models. CURE extracts concept-irrelevant representations through a dedicated content extractor and a reversal network, minimizing the loss of task-relevant information. A controllable debiasing module then fine-tunes the influence of residual conceptual cues via contrastive learning, allowing the model either to suppress harmful biases or to leverage correlations that are beneficial for the target task. Evaluated on the IMDB and Yelp datasets across three pre-trained architectures, CURE improves the F1 score by 10 points on IMDB and by 2 points on Yelp while incurring minimal computational overhead. This study offers a flexible, unsupervised design for addressing conceptual bias, paving the way for more reliable and fair language understanding systems.
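The sketch below illustrates, under stated assumptions, how a pipeline of this shape could be wired together: an MLP content extractor, an MLP reversal network that reconstructs the encoder embedding to limit information loss, and a supervised contrastive term whose weight controls how strongly residual conceptual cues are suppressed. All module names, dimensions, and loss weights here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a CURE-style pipeline; components and hyperparameters
# are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentExtractor(nn.Module):
    """Maps an encoder embedding to a (hopefully) concept-irrelevant representation."""
    def __init__(self, dim: int = 768, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, h):
        return self.net(h)


class ReversalNetwork(nn.Module):
    """Reconstructs the original embedding from the extracted content,
    penalizing the extractor when task-relevant information is lost."""
    def __init__(self, dim: int = 768, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, c):
        return self.net(c)


def supcon_loss(z, labels, temperature: float = 0.1):
    """Supervised contrastive (InfoNCE-style) loss, an assumed form of the
    controllable debiasing objective: representations sharing a task label
    are pulled together regardless of the underlying concept."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))          # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    pos_counts = pos.sum(1).clamp(min=1)
    loss = -(log_prob * pos).sum(1) / pos_counts
    return loss[pos.sum(1) > 0].mean()


def cure_step(encoder_emb, task_labels, extractor, reversal, classifier,
              alpha: float = 1.0, beta: float = 0.5):
    """One training step combining task, reconstruction, and debiasing losses.
    `beta` plays the role of the controllable knob: larger values suppress
    residual conceptual cues more aggressively."""
    content = extractor(encoder_emb)                   # concept-irrelevant representation
    recon = reversal(content)                          # reverse mapping back to encoder space
    recon_loss = F.mse_loss(recon, encoder_emb)        # limits loss of task-relevant information
    task_loss = F.cross_entropy(classifier(content), task_labels)
    debias_loss = supcon_loss(content, task_labels)
    return task_loss + alpha * recon_loss + beta * debias_loss
```

As a usage note, the encoder embedding would typically be the pooled output of a frozen or lightly fine-tuned pre-trained model, so that only the small extractor, reversal, and classifier heads add training cost, consistent with the lightweight, low-overhead framing above.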