To address the safety risks that arise when domain-specific knowledge is injected into large language models (LLMs) through fine-tuning-as-a-service (FaaS), this paper proposes Fine-Grained Safety Neurons (FGSN) with Training-Free Continual Projection, a method that mitigates fine-grained safety risks. To overcome the limitations of existing safety-layer mapping methods, we integrate multi-scale interactions between safety layers and fine-grained neurons to localize sparse, accurate fine-grained safety neurons while minimizing interference with subtask neurons. We then project the safety neuron parameters onto the safety direction to strengthen model safety and better align the model with human preferences. Extensive experiments on a range of fine-tuned LLMs demonstrate that FGSN significantly reduces harmfulness scores and attack success rates with minimal parameter modifications while preserving model usability. Furthermore, we introduce a task-specific, multidimensional, heterogeneous safety neuron cluster optimization mechanism to provide continual defense against, and generalization to, unforeseen safety problems.
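As a rough illustration of the training-free projection step, the sketch below edits only a hypothetical set of safety-neuron rows of a single weight matrix, removing the component of their fine-tuning drift that points away from an assumed safety direction (taken here as the aligned-minus-base parameter difference). The function name, the choice of safety direction, and the scaling factor `alpha` are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def project_safety_neurons(w_finetuned: torch.Tensor,
                           w_aligned: torch.Tensor,
                           w_base: torch.Tensor,
                           safety_rows: torch.Tensor,
                           alpha: float = 1.0) -> torch.Tensor:
    """Illustrative sketch: project flagged safety-neuron rows toward a safety direction.

    The safety direction is assumed, for illustration only, to be the
    aligned-minus-base parameter difference per row; only the selected rows
    are edited, leaving the remaining (task) neurons untouched.
    """
    w_out = w_finetuned.clone()

    # Assumed safety direction for the flagged rows, shape (k, d), normalized per row.
    safety_dir = w_aligned[safety_rows] - w_base[safety_rows]
    unit = safety_dir / (safety_dir.norm(dim=-1, keepdim=True) + 1e-8)

    # Fine-tuning drift of the safety neurons away from the aligned weights.
    drift = w_finetuned[safety_rows] - w_aligned[safety_rows]

    # Component of that drift along the safety direction.
    proj = (drift * unit).sum(dim=-1, keepdim=True) * unit

    # Remove the drift component, scaled by alpha; alpha=1 fully restores it.
    w_out[safety_rows] = w_finetuned[safety_rows] - alpha * proj
    return w_out

# Hypothetical usage: 2D weight matrices of identical shape, three flagged neuron rows.
# w_new = project_safety_neurons(w_ft, w_al, w_base, torch.tensor([3, 17, 42]))
```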