This paper presents a novel approach to bias mitigation in large language models (LLMs), applying steering vectors to adjust model activations during the forward pass. The researchers computed eight steering vectors, each corresponding to a different social bias axis such as age, gender, and race, on a training subset of the BBQ dataset, and compared their effectiveness against three other bias mitigation methods across four datasets. The optimized individual steering vectors achieved average improvements of 12.8% on BBQ, 8.3% on CLEAR-Bias, and 1% on StereoSet, outperforming prompting and Self-Debias in all cases and outperforming fine-tuning in 12 of 17 evaluations. Furthermore, among the four bias mitigation methods tested, steering vectors had the least impact on MMLU scores. This study presents the first systematic investigation of steering vectors for bias mitigation, shows that steering vectors are a computationally efficient and robust strategy, and carries broad implications for improving AI safety.
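To make the mechanism concrete, the following is a minimal sketch of how activation steering is commonly implemented: a precomputed steering vector is added to a transformer layer's hidden states during the forward pass via a hook. The paper's exact layer choice, scaling coefficient, and vector computation are not specified here; the names `make_steering_hook`, `layer_idx`, and `alpha` are hypothetical, and the example assumes a PyTorch model whose decoder layers emit hidden states of shape (batch, seq_len, d_model).

```python
import torch

def make_steering_hook(steering_vector: torch.Tensor, alpha: float = 1.0):
    """Return a forward hook that adds a scaled steering vector
    to a decoder layer's hidden-state output (assumed layout)."""
    def hook(module, inputs, output):
        # Many decoder layers return a tuple whose first element
        # is the hidden states of shape (batch, seq_len, d_model).
        hidden = output[0] if isinstance(output, tuple) else output
        vec = steering_vector.to(device=hidden.device, dtype=hidden.dtype)
        hidden = hidden + alpha * vec  # broadcast over batch and sequence
        if isinstance(output, tuple):
            return (hidden,) + output[1:]
        return hidden
    return hook

# Hypothetical usage with a Hugging Face-style causal LM; the layer
# index and scaling are illustrative assumptions, not values from
# the paper:
#
# handle = model.model.layers[layer_idx].register_forward_hook(
#     make_steering_hook(steering_vector, alpha=4.0))
# ... run generation with steering applied ...
# handle.remove()  # detach the hook to restore default behavior
```

Because the intervention is a single vector addition per layer at inference time, it requires no gradient updates to the model weights, which is consistent with the paper's framing of steering vectors as a computationally efficient alternative to fine-tuning.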