Post-trained language models (PoLMs) often suffer from over-confidence, which can undermine reliability in critical applications. This paper proposes Disagreement-Aware Confidence Alignment (DACA), a novel unsupervised method that optimizes the parameters (e.g., the temperature $\tau$) of post-hoc confidence calibration. DACA addresses the under-confidence that prediction disagreement between the pre-trained language model (PLM) and the PoLM introduces into unsupervised calibration: by selecting only agreement examples for tuning, it prevents disagreement examples from distorting the calibration parameters. Experiments show that DACA reduces the average Expected Calibration Error (ECE) of open-source and API-based LLMs by up to 15.08% on common benchmarks.
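To make the selection step concrete, the sketch below illustrates one possible implementation of DACA-style temperature tuning, not the paper's exact formulation: it keeps only examples on which the PLM and PoLM predictions agree, then fits $\tau$ so that the PoLM's temperature-scaled confidence tracks the PLM's confidence on that agreement set. The function names (`fit_daca_temperature`, `expected_calibration_error`) and the squared-error alignment objective are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar


def softmax(logits, tau=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / tau
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def fit_daca_temperature(polm_logits, plm_logits):
    """Fit tau on agreement examples only (hypothetical DACA-style sketch).

    polm_logits, plm_logits: (N, C) arrays of per-example class logits
    from the post-trained and pre-trained models on the same inputs.
    """
    agree = polm_logits.argmax(axis=1) == plm_logits.argmax(axis=1)
    z = polm_logits[agree]                            # drop disagreement examples
    target = softmax(plm_logits[agree]).max(axis=1)   # PLM confidence as target

    def alignment_loss(tau):
        conf = softmax(z, tau).max(axis=1)            # scaled PoLM confidence
        return np.mean((conf - target) ** 2)          # assumed alignment objective

    return minimize_scalar(alignment_loss, bounds=(0.05, 10.0),
                           method="bounded").x


def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard binned ECE: |accuracy - mean confidence| weighted by bin mass."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean()
                                       - confidences[in_bin].mean())
    return ece
```

In this sketch, disagreement examples are excluded before fitting, which mirrors the selection rule described above; the choice of a squared-error objective over, say, a likelihood-based one is an implementation assumption.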