To address the safety risks posed by fine-tuning large language models (LLMs), this paper proposes Fine-Grained Safety Neurons (FGSN) with Training-Free Continual Projection. Unlike existing safety defense strategies that focus solely on safety layers, FGSN accounts for the interaction between safety layers and individual neurons, enabling a more precise and efficient safety mechanism. FGSN projects the parameters of safety neurons toward the safety direction, improving model safety while better aligning the model with human preferences. Extensive experiments on several fine-tuned LLMs demonstrate that our method significantly reduces harmfulness scores and attack success rates with minimal parameter modification, while preserving model usability. Furthermore, by introducing a task-specific, multidimensional heterogeneous safety-neuron-cluster optimization mechanism, we achieve continual defense and generalization capabilities against unpredictable new safety problems.
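
For intuition, the following minimal sketch illustrates the general idea of a training-free projection of selected neuron parameters toward a safety direction. The function name, the use of a safety-aligned reference model to define the direction, and the projection strength `alpha` are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative sketch (not the paper's exact algorithm): shift the weight rows of
# hypothetical "safety neurons" in a fine-tuned layer toward a safety direction
# defined by a safety-aligned reference model. All names (ft_weight, ref_weight,
# safety_idx, alpha) are assumptions for illustration.
import torch

def project_safety_neurons(ft_weight: torch.Tensor,
                           ref_weight: torch.Tensor,
                           safety_idx: torch.Tensor,
                           alpha: float = 0.5) -> torch.Tensor:
    """Move selected neuron rows of a fine-tuned weight matrix toward the
    safety direction given by the reference (safety-aligned) weights.

    ft_weight:  [out_dim, in_dim] weights after task fine-tuning
    ref_weight: [out_dim, in_dim] weights of the safety-aligned reference model
    safety_idx: indices of neurons (rows) identified as safety neurons
    alpha:      projection strength in [0, 1]
    """
    updated = ft_weight.clone()
    # Safety direction for each selected neuron: from the fine-tuned weights
    # back toward the safety-aligned reference weights.
    direction = ref_weight[safety_idx] - ft_weight[safety_idx]
    # Training-free update: shift only the safety neurons along that direction,
    # leaving all other (task-relevant) neurons untouched.
    updated[safety_idx] = ft_weight[safety_idx] + alpha * direction
    return updated

# Example usage with random tensors standing in for real model weights.
out_dim, in_dim = 8, 16
ft = torch.randn(out_dim, in_dim)
ref = torch.randn(out_dim, in_dim)
# Suppose neurons 1, 3, and 5 were identified as fine-grained safety neurons.
idx = torch.tensor([1, 3, 5])
new_weight = project_safety_neurons(ft, ref, idx, alpha=0.7)
```

The sketch only conveys the flavor of the approach: the safety risk introduced by fine-tuning is mitigated by editing a small, targeted set of neuron parameters rather than retraining or modifying entire layers.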