This paper introduces BASE-Q, a method for improving the effectiveness of rotation techniques in the quantization pipeline of large language models (LLMs). Existing rotation-based quantization methods suffer from channel-mean misalignment and from increased rounding and clipping errors caused by Gaussian-shaped activation distributions. BASE-Q reduces these errors by combining bias correction with asymmetric scaling, and it avoids memory-intensive full-model backpropagation through block-wise optimization. Experiments on a range of LLMs and benchmarks show that BASE-Q reduces the accuracy loss by 50.5%, 42.9%, and 29.2% compared to QuaRot, SpinQuant, and OSTQuant, respectively.
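To make the two ingredients named above concrete, the toy sketch below illustrates the general ideas of bias correction (subtracting per-channel means so channels are centered) and asymmetric min/max quantization with a zero-point. This is a generic illustration under stated assumptions, not BASE-Q's actual algorithm; the function names and the per-tensor granularity are hypothetical choices for exposition.

```python
import torch

def bias_correct(x: torch.Tensor):
    """Remove the per-channel mean so channels are centered (generic bias correction)."""
    mu = x.mean(dim=0, keepdim=True)       # per-channel means
    return x - mu, mu                      # centered activations, stored bias term

def asymmetric_quantize(x: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Per-tensor asymmetric (min/max) quantization with a zero-point, then dequantize."""
    qmin, qmax = 0, 2 ** n_bits - 1
    x_min, x_max = x.min(), x.max()
    scale = (x_max - x_min).clamp(min=1e-8) / (qmax - qmin)
    zero_point = torch.round(-x_min / scale).clamp(qmin, qmax)
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale        # simulated (fake-)quantized values

# Toy usage: center activations with a nonzero channel mean, then quantize them.
x = torch.randn(128, 64) + 3.0             # hypothetical activations
x_centered, bias = bias_correct(x)
x_q = asymmetric_quantize(x_centered, n_bits=4)
```

Centering removes the channel-mean offset that would otherwise consume quantization range, and the asymmetric scale/zero-point pair adapts the grid to the remaining min/max span; BASE-Q's block-wise optimization of these quantities is not shown here.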