This paper presents a novel method for aligning generated output to arbitrary topics by applying Sparse Autoencoders (SAEs) to the layers of a large language model (LLM). Building on prior findings that SAE neurons correspond to interpretable concepts, we 1) score each SAE neuron by its semantic similarity to the alignment target text, and 2) modify the output at the SAE layer by emphasizing neurons relevant to the topic. We conduct experiments on various public topic datasets, including Amazon reviews, medicine, and flattery, with open-source LLM and SAE combinations such as GPT-2 and Gemma. Alignment experiments on medical prompts show advantages over fine-tuning, including an improvement in average language acceptance (0.25 vs. 0.5), a reduction in training time across topics (333.6 seconds vs. 62 seconds), and an inference-time overhead (+0.00092 seconds/token) acceptable for many applications. The source code is available at github.com/IBM/sae-steering.
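The two steps described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each SAE neuron already has a concept embedding (e.g. an embedding of its textual description) and that steering is done by amplifying the activations of the top-scoring latents before they are decoded back into the residual stream; the function names, `top_k`, and `gain` are hypothetical.

```python
import numpy as np


def score_neurons(concept_embs: np.ndarray, target_emb: np.ndarray) -> np.ndarray:
    """Step 1: cosine similarity of each SAE neuron's concept embedding
    to the embedding of the alignment target text."""
    c = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    t = target_emb / np.linalg.norm(target_emb)
    return c @ t


def steer_latents(latents: np.ndarray, scores: np.ndarray,
                  top_k: int = 32, gain: float = 4.0) -> np.ndarray:
    """Step 2: emphasize topic-relevant neurons by scaling the
    activations of the top-k highest-scoring SAE latents."""
    boosted = latents.copy()
    idx = np.argsort(scores)[-top_k:]
    boosted[idx] *= gain
    return boosted
```

In use, `score_neurons` would be run once per target topic, and `steer_latents` applied to the SAE activations at every generation step, which matches the small per-token overhead reported above.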