Retrieval-augmented generation (RAG) enhances the capabilities of large language models (LLMs) by incorporating external knowledge into their input prompts. However, when retrieved context conflicts with an LLM's parametric knowledge, the model often fails to resolve the conflict, deferring to incorrect external context over correct parametric knowledge. To address this issue, we propose Conflict-Aware REtrieval-Augmented Generation (CARE), which consists of a context evaluator and a base LLM. The context evaluator compresses raw context tokens into memory token embeddings. Through grounded/adversarial soft prompting, it is trained to identify unreliable context and to produce signals that steer inference toward the more reliable knowledge source. Extensive experiments demonstrate that CARE effectively mitigates context-memory conflicts, achieving an average performance improvement of 5.0% on QA and fact-checking benchmarks and suggesting a promising direction for reliable and adaptable RAG systems.
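To make the context-evaluator idea concrete, the following is a minimal sketch of how such a module might work, not the authors' implementation: learned query vectors pool raw context token embeddings into a small set of memory token embeddings via cross-attention, and a classification head scores context reliability (the grounded-vs.-adversarial distinction). All names here (ContextEvaluator, n_memory, reliability_head) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ContextEvaluator(nn.Module):
    """Hypothetical sketch of a CARE-style context evaluator.

    Compresses a sequence of raw context token embeddings into a small
    set of memory token embeddings, then scores how reliable the
    retrieved context appears.
    """

    def __init__(self, d_model: int = 768, n_memory: int = 8, n_heads: int = 8):
        super().__init__()
        # Learned soft-prompt queries that pool the context into n_memory slots.
        self.memory_queries = nn.Parameter(torch.randn(n_memory, d_model) * 0.02)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Binary head: probability that the context is reliable
        # (trained on grounded vs. adversarial examples in this sketch).
        self.reliability_head = nn.Linear(d_model, 1)

    def forward(self, context_embeds: torch.Tensor):
        # context_embeds: (batch, seq_len, d_model), e.g. from the base
        # LLM's embedding layer applied to the retrieved passage.
        batch = context_embeds.size(0)
        queries = self.memory_queries.unsqueeze(0).expand(batch, -1, -1)
        # Compress raw context tokens into memory token embeddings.
        memory, _ = self.cross_attn(queries, context_embeds, context_embeds)
        # Mean-pool the memory slots and score context reliability.
        reliability = torch.sigmoid(self.reliability_head(memory.mean(dim=1)))
        return memory, reliability.squeeze(-1)


if __name__ == "__main__":
    evaluator = ContextEvaluator()
    fake_context = torch.randn(2, 128, 768)  # stand-in for passage embeddings
    memory, score = evaluator(fake_context)
    print(memory.shape, score.shape)  # torch.Size([2, 8, 768]) torch.Size([2])
```

Under this reading, the compressed memory embeddings and the reliability score together act as the guidance signal: a low score indicates the base LLM should lean on its parametric knowledge rather than the retrieved context.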