To reduce the cost of adapting large-scale language models to downstream tasks, this paper proposes a method that adjusts the output distribution directly during decoding rather than updating the model's weights. We introduce Steering Vector Decoding (SVD), a lightweight, PEFT-compatible method. After a brief warm-up fine-tuning, we extract task-specific steering vectors from the gradient of the KL divergence, and then apply these vectors during decoding to steer the model's output distribution toward the task distribution. We show that SVD is equivalent to a first-order approximation of full fine-tuning and admits a globally optimal solution for the steering-vector strength. Across various tasks and benchmarks, SVD combined with existing PEFT methods improves multiple-choice accuracy by up to 5 points, open-ended truthfulness by 2 points, and accuracy on commonsense datasets by 1-2 points.
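As a rough illustration of the core idea (a minimal sketch, not the paper's exact algorithm), the steering direction at each decoding step can be taken as the negative gradient of the KL divergence between the warm-up-tuned task distribution and the base model's next-token distribution with respect to the logits, which reduces to the difference of the two distributions. The function names, the scalar `strength` parameter, and the toy distributions below are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

def steering_vector(base_logits, task_probs):
    # Negative gradient of KL(p_task || softmax(z)) with respect to the
    # logits z is (p_task - softmax(z)); moving z along this direction
    # decreases the KL divergence to the task distribution.
    return task_probs - softmax(base_logits)

def steered_decode_step(base_logits, task_probs, strength):
    # Shift the base logits along the steering direction, scaled by a
    # strength hyperparameter, then renormalize into a distribution.
    return softmax(base_logits + strength * steering_vector(base_logits, task_probs))

# Toy example: a 3-token vocabulary with made-up logits.
base_logits = np.array([2.0, 0.5, -1.0])          # base model logits
task_probs = softmax(np.array([0.0, 2.0, -1.0]))  # stand-in task distribution
steered = steered_decode_step(base_logits, task_probs, strength=0.5)
```

For a small positive strength this is one gradient-descent step on the KL divergence in logit space, so the steered distribution moves closer to the task distribution without touching any model weights.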