This paper presents a method for improving the performance of quantized models by increasing their size through post-training optimization. Whereas existing quantization techniques focus on reducing model size, we propose expanding the model to compensate for the performance degradation introduced by quantization. Specifically, by quantizing the Llama3 1B model to 4 bits and increasing its size by 5%, we achieve an average 9% reduction in perplexity relative to QuaRot and SpinQuant, while remaining 3.8% smaller than the BF16 baseline model. These results demonstrate that post-training model expansion is a viable strategy for improving model performance within the quantization co-design space.