As large language models (LLMs) are increasingly used for decision support, they may provide inaccurate information, potentially leading to incorrect human decisions. To address this, Retrieval-Augmented Generation (RAG) techniques, which ground responses in external documents, have been proposed. However, existing RAG methods do not focus on ensuring that the resulting human decisions are properly calibrated. In this paper, we propose Calibrated Retrieval-Augmented Generation (CalibRAG), a novel retrieval method that ensures decisions informed by RAG are well calibrated. Experiments on various datasets show that CalibRAG improves both calibration performance and accuracy over existing baselines.