This paper studies a method for deterministically controlling the generation language of a multilingual large language model (LLM) in the zero-shot setting. We investigate whether the generation language of an LLM can be steered at inference time by leveraging sparse autoencoder (SAE) features, which prior work has shown to correlate with interpretable model behaviors. We use SAEs pre-trained on the residual streams of Gemma-2B and Gemma-9B to identify the features whose activations differ most strongly across five target languages: English, Chinese, Japanese, Spanish, and French. By modifying a single SAE feature in one transformer layer, we achieve controlled language switching with a success rate of up to 90% according to FastText language classification, while maintaining semantic fidelity as measured by LaBSE similarity. Our analysis shows that language steering is most effective in mid-to-late transformer layers and is amplified by specific attention heads that are disproportionately associated with language-sensitive SAE features. These results demonstrate the potential of sparse feature steering as a lightweight and interpretable mechanism for controlled multilingual generation.
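To make the steering operation described above concrete, the sketch below shows one way a single SAE decoder direction could be added to a layer's residual stream during generation. It is an illustrative sketch only, not the paper's implementation: the checkpoint name, the constants `LAYER`, `FEATURE_ID`, and `ALPHA`, and the file `sae_layer20_decoder.pt` holding the SAE decoder matrix are all hypothetical placeholders chosen for illustration.

```python
# Minimal sketch: steer generation by adding one SAE feature's decoder
# direction to the residual stream of a single transformer layer.
# All constants and the decoder file below are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "google/gemma-2-2b"  # assumed checkpoint
LAYER = 20                        # hypothetical mid-to-late layer index
FEATURE_ID = 12345                # hypothetical language-sensitive SAE feature
ALPHA = 8.0                       # hypothetical steering strength

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

# W_dec: (num_features, d_model) SAE decoder matrix for the chosen layer,
# loaded from wherever the pre-trained SAE is stored (placeholder path).
W_dec = torch.load("sae_layer20_decoder.pt")
steer_dir = W_dec[FEATURE_ID].to(model.dtype)  # (d_model,)

def add_feature(_module, _inputs, output):
    # The decoder block's output contains the residual stream:
    # shape (batch, seq_len, d_model). Add the scaled feature direction.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * steer_dir.to(hidden.device)
    return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

# Hook the chosen layer, generate, then remove the hook.
handle = model.model.layers[LAYER].register_forward_hook(add_feature)
try:
    prompt = "Tell me about the weather today."
    ids = tok(prompt, return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=50, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()
```

In this sketch the intervention is applied at every token position of one layer; whether to scale by the feature's typical activation magnitude, clamp the feature instead of adding to it, or restrict the edit to generated tokens are design choices the abstract leaves open.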