This paper presents ICQuant, an efficient low-bit post-training quantization (PTQ) technique that addresses the high memory footprint of large language models (LLMs). Existing outlier suppression techniques either fail to effectively shrink the quantization range or incur large bit overhead; to overcome these limitations, ICQuant adopts an efficient index coding scheme that leverages outlier statistics. It shrinks the quantization range with significantly lower bit overhead (approximately 0.3 bits per weight) than existing techniques and can be applied on top of existing quantizers to further improve their performance. Experimental results show that, at only 2.3 bits/weight, ICQuant improves the zero-shot accuracy of the Llama3-70B model by up to 130%–150% compared to existing techniques (QTIP, QuIP#), achieving performance comparable to the best-performing fine-tuned quantization method (PV-tuning) without any fine-tuning.
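To make the core idea concrete, the following is a minimal sketch of outlier splitting combined with index coding of outlier positions. It assumes a simple per-group uniform quantizer and a fixed-length gap code; the names `uniform_quantize`, `icquant_sketch`, and `outlier_frac` are illustrative, and ICQuant's actual outlier selection and coding scheme may differ.

```python
# Sketch only: split weights into outliers/inliers, quantize each group over its own
# (narrower) range, and estimate the bit overhead of coding outlier positions as gaps.
# The selection rule, quantizer, and code below are illustrative assumptions.
import numpy as np

def uniform_quantize(x, bits):
    """Uniformly quantize x over its own [min, max] range with 2**bits levels."""
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (2**bits - 1) if hi > lo else 1.0
    return np.round((x - lo) / scale) * scale + lo

def icquant_sketch(w, bits=2, outlier_frac=0.04):
    """Quantize inliers and outliers separately; return the dequantized weights and an
    estimate of the per-weight overhead of index-coding the outlier positions."""
    n = w.size
    k = max(1, int(outlier_frac * n))
    outlier_idx = np.argsort(np.abs(w))[-k:]          # largest-magnitude weights
    mask = np.zeros(n, dtype=bool)
    mask[outlier_idx] = True

    w_hat = np.empty_like(w)
    w_hat[~mask] = uniform_quantize(w[~mask], bits)   # inliers: much narrower range
    w_hat[mask] = uniform_quantize(w[mask], bits)     # outliers: quantized separately

    # Index coding: store sorted outlier positions as gaps between consecutive indices;
    # a fixed-length code sized for the largest gap gives a simple overhead upper bound.
    pos = np.sort(outlier_idx)
    gaps = np.diff(np.concatenate(([-1], pos)))
    overhead_bits = k * np.ceil(np.log2(gaps.max() + 1)) / n
    return w_hat, overhead_bits

rng = np.random.default_rng(0)
w = rng.standard_normal(4096) * (1 + 10 * (rng.random(4096) < 0.01))  # heavy-tailed weights
w_hat, overhead = icquant_sketch(w, bits=2)
print(f"MSE: {np.mean((w - w_hat) ** 2):.4f}, index-coding overhead ~ {overhead:.2f} bits/weight")
```

Separating the few large-magnitude outliers lets the inlier quantizer use a much tighter range, and because the outlier positions are sparse, coding them as gaps keeps the per-weight overhead small, which is the effect the paper quantifies at roughly 0.3 bits/weight.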