This paper explores post-training quantization (PTQ), a practical compression method for addressing the model-size constraints that arise when deploying large language models (LLMs). We note that previous studies have not provided a comprehensive understanding of the impact of PTQ or of the scaling laws governing quantized models. We experimentally derive hierarchical scaling laws across a range of tasks: we decompose knowledge in LLMs into memorization and exploitation skills and develop an integrated quantitative framework encompassing model size, effective bit width, calibration set size, and group size. Our results reveal that knowledge memorization is significantly more sensitive to changes in effective bit width, calibration set size, and model size than knowledge exploitation is. These findings provide a granular understanding of the impact of PTQ and offer guidance for developing knowledge-aware quantization strategies that better preserve targeted cognitive functions.
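For context on two of the knobs the framework studies, bit width and group size, the following is a minimal sketch of symmetric group-wise round-to-nearest quantization, a common PTQ baseline; it is illustrative only and is not claimed to be the authors' exact method (the function names and parameters are our own):

```python
import numpy as np

def quantize_groupwise(weights, bit_width=4, group_size=128):
    """Symmetric group-wise round-to-nearest quantization (a common PTQ baseline)."""
    w = weights.reshape(-1, group_size)                   # split weights into groups
    qmax = 2 ** (bit_width - 1) - 1                       # e.g. 7 for signed 4-bit
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax   # one scale per group
    scale[scale == 0] = 1.0                               # guard against all-zero groups
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale, shape):
    """Reconstruct approximate float weights from integer codes and group scales."""
    return (q.astype(np.float32) * scale).reshape(shape)

# Quantize a random weight matrix and measure the reconstruction error.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)
q, s = quantize_groupwise(W, bit_width=4, group_size=128)
W_hat = dequantize(q, s, W.shape)
err = np.abs(W - W_hat).mean()
```

Smaller group sizes give each scale fewer weights to cover and so reduce the rounding error, at the cost of storing more scales; this is the bit-width/group-size trade-off the paper's framework quantifies.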