Large language models (LLMs) are increasingly used for decision support, but they can provide incorrect information, potentially leading humans to make poor decisions. Retrieval-Augmented Generation (RAG), which grounds responses in retrieved external documents, is a common remedy for this problem. However, existing RAG methods do not account for whether the user's confidence in the resulting decision is well calibrated. To address this limitation, we propose Calibrated Retrieval-Augmented Generation (CalibRAG), a novel retrieval method that improves the calibration of RAG-based decision-making. Across a variety of datasets, we demonstrate that CalibRAG improves both calibration and accuracy over existing methods.