This paper introduces Yet Another Quantization Algorithm (YAQA), an adaptive rounding algorithm for post-training quantization. Whereas existing methods such as GPTQ/LDLQ minimize the activation error of each layer in isolation, YAQA directly targets the quantized network's end-to-end output error. Our results show that YAQA achieves approximately 30% lower output error than these layer-wise methods and also outperforms quantization-aware training.
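
To make the contrast concrete, the following is a schematic formulation (notation ours; the first objective reflects the layer-wise proxy used by GPTQ/LDLQ-style methods, and the specific divergence D in the second is an illustrative assumption, not a statement of YAQA's exact objective):

% Layer-wise proxy (GPTQ/LDLQ-style): round each layer's weights W_\ell to
% quantized weights \widehat{W}_\ell so that the layer's output error on
% calibration activations X_\ell is small, independently of other layers.
\[
\min_{\widehat{W}_\ell} \;\bigl\| (W_\ell - \widehat{W}_\ell)\, X_\ell \bigr\|_F^2
\qquad \text{independently for each layer } \ell .
\]

% End-to-end view: choose the quantized parameters \widehat{\theta} so that
% the full network's outputs stay close to the original model's outputs
% over calibration inputs x, under some divergence D (e.g., KL; the exact
% choice here is an assumption for illustration).
\[
\min_{\widehat{\theta}} \;\mathbb{E}_{x}\!\left[ D\bigl( f_{\theta}(x) \,\big\|\, f_{\widehat{\theta}}(x) \bigr) \right].
\]

The first objective is cheap to optimize but is only a proxy for the quantity that matters; the second is the end-to-end error that YAQA's adaptive rounding aims to reduce.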