This paper proposes Rank-One Safety Injection (ROSI), a method for improving the safety alignment of large language models (LLMs). ROSI is a simple, fine-tuning-free, rank-one weight modification that permanently steers model activations toward the refusal-mediating subspace. It computes the required safety direction from a small set of paired harmful and harmless instructions and applies the resulting rank-one update to every weight matrix that writes to the residual stream. Evaluation with Llama Guard 3 shows that ROSI consistently raises safety refusal rates while preserving the model's utility. We further show that ROSI can amplify latent safety directions in "uncensored" models, demonstrating its value as an effective last-stage safety procedure. These results indicate that targeted, interpretable weight steering is an inexpensive and potent mechanism for improving LLM safety, complementing more resource-intensive fine-tuning paradigms.
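The core operation described above can be sketched in a few lines. The snippet below is a minimal, illustrative toy: it assumes the safety direction is estimated as the normalized difference of mean activations between harmful and harmless prompts (a common convention in the refusal-direction literature), and it applies one plausible form of rank-one injection, W' = (I + alpha * r r^T) W, to a single residual-stream write matrix. The array names, the dimension, and the choice of alpha are all hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8

# Toy per-prompt residual-stream activations at one layer. In practice these
# would be collected by running the model on small sets of harmful and
# harmless instructions; here they are synthetic clusters.
harmful_acts = rng.normal(size=(16, d_model)) + 2.0   # shifted cluster
harmless_acts = rng.normal(size=(16, d_model))

# Safety (refusal) direction: normalized difference of means.
r = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
r /= np.linalg.norm(r)

# Rank-one injection into a residual-stream write matrix W (e.g. an MLP or
# attention output projection, acting as x -> W @ x). Adding
# alpha * r r^T W amplifies the component of every written vector along r.
alpha = 0.1  # illustrative steering strength
W = rng.normal(size=(d_model, d_model))
W_rosi = W + alpha * np.outer(r, r) @ W

# The modification is exactly rank one.
assert np.linalg.matrix_rank(W_rosi - W) == 1

# Along r, outputs are scaled by (1 + alpha); orthogonal components are
# untouched, which is why utility can be largely preserved.
x = rng.normal(size=d_model)
out, out_rosi = W @ x, W_rosi @ x
print(np.allclose(out_rosi @ r, (1 + alpha) * (out @ r)))
```

Because the edit is a static change to the weights, it adds no inference-time cost and requires no gradient computation, unlike fine-tuning-based safety training.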