In this paper, we propose Multi-Attribute Targeted Steering (MAT-Steer), a novel framework for steering large language model (LLM) behavior in multi-attribute settings, where multiple attributes (e.g., helpfulness and toxicity) must be controlled simultaneously. MAT-Steer builds on inference-time intervention (ITI), adjusting the model's internal representations by editing token representations at inference time, and reduces conflicts between attributes by enforcing sparsity of the interventions and orthogonality between the steering vectors of different attributes. Experimental results on question answering (QA) and generation tasks demonstrate that MAT-Steer outperforms both conventional ITI and parameter-efficient fine-tuning methods: it improves accuracy by an average of 3% on QA tasks and achieves a 55.82% win rate over the best ITI baseline.
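
To make the mechanism concrete, the sketch below illustrates one plausible reading of gated multi-attribute steering with an orthogonality penalty. The function names, tensor shapes, and sigmoid gating are our illustrative assumptions for exposition, not the paper's exact implementation.

```python
# Minimal sketch of multi-attribute inference-time steering (illustrative,
# not the authors' exact method). Names like `steer_tokens` and
# `orthogonality_penalty` are hypothetical.
import torch

def steer_tokens(hidden, vectors, gates):
    """Add gated attribute steering vectors to token representations.

    hidden:  (batch, seq, d)       token hidden states at one layer
    vectors: (n_attr, d)           one steering vector per attribute
    gates:   (batch, seq, n_attr)  per-token gates in [0, 1]; sparse gates
             restrict the intervention to tokens that actually need steering
    """
    # (batch, seq, n_attr) @ (n_attr, d) -> (batch, seq, d)
    return hidden + gates @ vectors

def orthogonality_penalty(vectors):
    """Penalize overlap between attribute vectors to reduce conflicts:
    sum of squared off-diagonal entries of the Gram matrix."""
    gram = vectors @ vectors.T                       # (n_attr, n_attr)
    off_diag = gram - torch.diag(torch.diag(gram))
    return (off_diag ** 2).sum()

# Toy usage with assumed dimensions.
batch, seq, d, n_attr = 2, 5, 16, 3
hidden = torch.randn(batch, seq, d)
vectors = torch.randn(n_attr, d, requires_grad=True)
gates = torch.sigmoid(torch.randn(batch, seq, n_attr))  # would be learned

steered = steer_tokens(hidden, vectors, gates)
penalty = orthogonality_penalty(vectors)  # added to the training objective
```

Keeping the attribute vectors near-orthogonal means that intervening for one attribute (e.g., reducing toxicity) moves representations in a direction with little component along the other attributes' vectors, which is one way to limit cross-attribute interference.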