This paper presents research on ensuring the safety of large language models (LLMs). We highlight vulnerabilities in the safety alignment mechanism and propose the Superficial Safety Alignment Hypothesis (SSAH), which posits that safety alignment can be understood as a binary classification task in which the model either fulfills or refuses a user request. Based on this hypothesis, we identify the components essential to maintaining safety and successfully isolate four types of attribute-critical components: safety-critical units (SCUs), utility-critical units (UCUs), composite units (CUs), and redundant units (RUs). Specifically, we demonstrate that freezing certain safety-critical components during fine-tuning allows the model to retain its safety attributes while adapting to new tasks. Furthermore, we demonstrate that redundant units in the pre-trained model can serve as an "alignment budget", achieving the alignment goal at minimal cost. In conclusion, we argue that the atomic functional unit of safety in LLMs is the neuron, and that safety alignment need not be complicated.
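To make the freezing step concrete, the sketch below shows one way it could be realized in PyTorch: a backward hook zeroes the gradients of a hypothetical set of safety-critical output neurons in a single linear layer, so a fine-tuning step updates the rest of the layer while leaving those neurons untouched. The layer shape, the `safety_critical` index set, and the plain-SGD setup are illustrative assumptions, not the paper's actual identification procedure.

```python
# Minimal sketch (assumed setup, not the paper's exact procedure): freeze a
# hypothetical set of "safety-critical" neurons in one linear layer by zeroing
# their gradients during fine-tuning, so the optimizer never updates them.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for one projection matrix inside a transformer block.
layer = nn.Linear(16, 32)

# Hypothetical indices of output neurons identified as safety-critical
# (row i of layer.weight and entry i of layer.bias feed output neuron i).
safety_critical = torch.tensor([0, 3, 7])

def mask_rows(grad: torch.Tensor) -> torch.Tensor:
    # Zero the gradient entries belonging to safety-critical neurons.
    grad = grad.clone()
    grad[safety_critical] = 0.0
    return grad

# The hooks run on every backward pass, so the frozen rows receive no updates.
layer.weight.register_hook(mask_rows)
layer.bias.register_hook(mask_rows)

optimizer = torch.optim.SGD(layer.parameters(), lr=1e-2)
frozen_before = layer.weight[safety_critical].clone()

# One dummy fine-tuning step on random data.
x, target = torch.randn(8, 16), torch.randn(8, 32)
loss = nn.functional.mse_loss(layer(x), target)
loss.backward()
optimizer.step()

# The safety-critical rows are unchanged; the rest of the layer adapted.
assert torch.equal(layer.weight[safety_critical], frozen_before)
```

In a full model, the same gradient masking would presumably be applied per projection matrix across layers, with the index sets supplied by whatever attribution method is used to locate the safety-critical units.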