This paper proposes a novel rotation matrix generation method for post-training quantization (PTQ) to address the deployment challenges of large language models (LLMs), which demand substantial computational resources. To mitigate the performance degradation that existing rotation-based methods suffer at very low bit widths, such as 2 bits, we present an approach that reduces quantization error by clustering similar frequency components using the Walsh-Hadamard transform and sequence alignment. Specifically, we introduce the Grouped Sequence Alignment Rotation (GSR) technique, which applies a block-diagonal rotation matrix composed of small Walsh blocks, effectively isolating the influence of outliers and achieving performance comparable to learning-based optimization methods. We validate the proposed method on inference tasks and through perplexity (PPL) evaluation on the WikiText-2 dataset, demonstrating improvements over existing learned rotation techniques.
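To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of constructing a block-diagonal rotation from small Walsh-Hadamard blocks after grouping channels so that similar ones are adjacent. The grouping statistic, the group size, and all identifiers (e.g. `similarity_scores`, `group_size`) are illustrative assumptions standing in for the frequency-based grouping and sequence alignment described above.

```python
# Sketch: block-diagonal rotation built from small Walsh-Hadamard blocks.
# Assumes a per-channel similarity score is available for grouping.
import numpy as np
from scipy.linalg import hadamard, block_diag

def grouped_walsh_rotation(similarity_scores: np.ndarray, group_size: int = 16):
    """Return (perm, R): a channel permutation and a block-diagonal rotation.

    similarity_scores: one scalar per channel, used here as a stand-in for
    the paper's frequency-based grouping (assumption); channels with close
    scores land in the same Walsh block.
    """
    n = similarity_scores.shape[0]
    assert n % group_size == 0, "channel count must be divisible by group size"

    # Sort channels so that similar ones become adjacent (grouping proxy).
    perm = np.argsort(similarity_scores)

    # Normalized Walsh-Hadamard block: H @ H.T = I, so each block is orthogonal.
    H = hadamard(group_size).astype(np.float64) / np.sqrt(group_size)

    # Block-diagonal rotation applied over the permuted channels.
    R = block_diag(*[H] * (n // group_size))
    return perm, R

# Usage: rotate a weight matrix W (out_features x in_features) along its
# input dimension; the inverse rotation would be applied to activations.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_in, n_out = 64, 32
    W = rng.standard_normal((n_out, n_in))
    scores = rng.standard_normal(n_in)      # placeholder similarity statistic
    perm, R = grouped_walsh_rotation(scores, group_size=16)
    W_rot = W[:, perm] @ R                  # rotated weights to be quantized
    # Orthogonality check: rotating back recovers the permuted weights.
    assert np.allclose(W_rot @ R.T, W[:, perm])
```

Because each small Walsh block is orthogonal, the full block-diagonal matrix is also orthogonal, so the rotation can be folded into the weights without changing the layer's output while confining each outlier's influence to its own block.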