This study explores the use of pre-trained sparse autoencoder (SAE) features to control the output language of a multilingual large language model (LLM). Specifically, we applied SAE features to the residual streams of the Gemma-2B and Gemma-9B models, in a zero-shot setting without explicit language prompts or fine-tuning, to identify features whose activations differ across English, Chinese, Japanese, Spanish, and French. By manipulating a single SAE feature, we achieved language switching with a success rate of up to 90% (as judged by FastText language identification) while preserving semantic fidelity, measured by LaBSE embedding similarity. Our analysis reveals that language steering is most effective in the mid-to-late transformer layers and is amplified by specific attention heads associated with language-sensitive SAE features.
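
To illustrate the kind of intervention described above, the sketch below adds a scaled decoder direction to the residual stream at a single layer of a Gemma model during generation. It is a minimal illustration under stated assumptions, not the paper's exact procedure: the checkpoint name, layer index, steering coefficient, and the randomly initialized placeholder direction (which in practice would be the decoder vector of a language-sensitive feature from a pre-trained SAE) are all assumptions.

```python
# Minimal sketch (assumed configuration): steer generation by adding a scaled
# SAE decoder direction to the residual stream at one transformer layer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "google/gemma-2-2b"  # assumed checkpoint
LAYER = 18                        # assumed mid-to-late layer
SCALE = 8.0                       # assumed steering coefficient

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

# Placeholder for a language-sensitive SAE feature's decoder direction;
# in practice this would be loaded from a pre-trained SAE.
hidden_size = model.config.hidden_size
steering_direction = torch.randn(hidden_size, dtype=torch.bfloat16)
steering_direction = steering_direction / steering_direction.norm()

def add_steering(module, inputs, output):
    # Decoder layers return a tuple; the hidden states are the first element.
    hidden = output[0]
    hidden = hidden + SCALE * steering_direction.to(hidden.device)
    return (hidden,) + output[1:]

handle = model.model.layers[LAYER].register_forward_hook(add_steering)
try:
    prompt = "The weather today is"
    ids = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=40, do_sample=False)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()
```

In an evaluation pipeline along the lines described in the abstract, the generated continuation would then be scored with a language identifier (e.g., FastText) for switching success and with LaBSE embedding similarity for semantic fidelity.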