This paper presents CURE, a novel lightweight framework for addressing concept-based spurious correlations that compromise the robustness and fairness of pre-trained language models. CURE extracts concept-irrelevant representations through a content extractor paired with a reversal network, minimizing the loss of task-relevant information. A controllable debiasing module then adjusts the influence of residual conceptual cues via contrastive learning, allowing the model either to suppress detrimental biases or to exploit correlations that are beneficial for the target task. Evaluated on the IMDB and Yelp datasets with three pre-trained architectures, CURE achieves absolute F1 gains of +10 points on IMDB and +2 points on Yelp while adding minimal computational overhead. The framework offers a flexible, unsupervised blueprint for addressing conceptual bias, paving the way for more reliable and fairer language understanding systems.
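
To make the two-stage design concrete, the sketch below illustrates one plausible reading of the pipeline: a content extractor trained adversarially against a concept predictor (here realized with a gradient-reversal layer standing in for the reversal network), plus a contrastive debiasing term whose weight controls how strongly residual concept cues are suppressed. All names (`ContentExtractor`, `GradReverse`, `debias_contrastive_loss`, `hidden_dim`, `lambd`, `temperature`) and the choice of positive pairs are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a CURE-style debiasing pipeline (not the authors' code).
# Assumes a frozen pre-trained encoder that yields sentence embeddings of size
# `hidden_dim`, and models the "reversal network" as a gradient-reversal
# adversarial concept classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class ContentExtractor(nn.Module):
    """Maps encoder embeddings to concept-irrelevant content representations.

    An adversarial concept head, fed through gradient reversal, discourages the
    content representation from encoding the spurious concept while a separate
    task head (not shown) preserves task-relevant information.
    """
    def __init__(self, hidden_dim=768, num_concepts=2, lambd=1.0):
        super().__init__()
        self.content = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.concept_head = nn.Linear(hidden_dim, num_concepts)
        self.lambd = lambd

    def forward(self, emb):
        z = self.content(emb)                    # concept-irrelevant content
        rev = GradReverse.apply(z, self.lambd)   # reversed gradients to the extractor
        concept_logits = self.concept_head(rev)  # adversarial concept prediction
        return z, concept_logits


def debias_contrastive_loss(z, task_labels, concept_labels, temperature=0.1):
    """Contrastive term pulling together examples that share a task label but
    come from different concept groups (an illustrative positive-pair choice)."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temperature
    same_task = task_labels[:, None] == task_labels[None, :]
    diff_concept = concept_labels[:, None] != concept_labels[None, :]
    pos_mask = (same_task & diff_concept).float()
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    denom = pos_mask.sum(1).clamp(min=1)
    return -(pos_mask * log_prob).sum(1).div(denom).mean()
```

In a full training loop, the task loss, the adversarial concept loss, and this contrastive term would be combined with a tunable coefficient on the contrastive part; under the assumptions above, that coefficient plays the role of CURE's controllable debiasing strength, letting practitioners dial concept influence down (or retain it when the correlation is benign).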