We evaluate the impact of quantization on the Vision-Language Model (VLM) CLIP at scale. Measuring reliability metrics alongside accuracy, we find counterintuitive effects that depend on the pretraining source: quantization consistently improves the calibration of underconfident pretrained models but tends to degrade the calibration of overconfident variants. We demonstrate that out-of-distribution (OOD) detection can improve even where calibration degrades, and that specific quantization-aware training (QAT) methods yield simultaneous gains in accuracy, calibration, and OOD robustness.
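To make the evaluation concrete, the sketch below illustrates one plausible pipeline, not the paper's exact protocol: apply PyTorch post-training dynamic quantization to a zero-shot CLIP classifier and compare accuracy and expected calibration error (ECE) before and after. The `load_clip_zero_shot` loader and `val_loader` are hypothetical stand-ins for a CLIP checkpoint wrapped as a classifier and an evaluation set.

```python
# Minimal sketch (assumed setup, not the paper's protocol): compare accuracy and
# ECE of a zero-shot classifier before and after int8 dynamic quantization.
import torch
import torch.nn as nn

def expected_calibration_error(confidences, correct, n_bins=15):
    """Equal-width-bin ECE over top-1 confidences."""
    ece = torch.zeros(1)
    bin_edges = torch.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].float().mean()    # accuracy within the bin
            conf = confidences[in_bin].mean()       # mean confidence within the bin
            ece += in_bin.float().mean() * (acc - conf).abs()
    return ece.item()

@torch.no_grad()
def evaluate(model, loader):
    """Collect top-1 confidences and correctness flags over a loader."""
    confs, correct = [], []
    for images, labels in loader:
        probs = model(images).softmax(dim=-1)       # zero-shot class probabilities
        conf, pred = probs.max(dim=-1)
        confs.append(conf)
        correct.append(pred == labels)
    return torch.cat(confs), torch.cat(correct)

model = load_clip_zero_shot()                       # hypothetical: CLIP wrapped as a classifier
quantized = torch.ao.quantization.quantize_dynamic( # int8 weights on linear layers
    model, {nn.Linear}, dtype=torch.qint8
)
for name, m in [("fp32", model), ("int8", quantized)]:
    confs, corr = evaluate(m, val_loader)           # val_loader: hypothetical eval set
    print(name, "acc:", corr.float().mean().item(),
          "ECE:", expected_calibration_error(confs, corr))
```

Under this setup, an underconfident pretrained model would show a lower ECE for the int8 run than the fp32 run, while an overconfident variant would show the reverse.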