This paper focuses on model merging, a learning-free approach that integrates multiple task-specific models and thereby avoids the substantial computational and data costs of fine-tuning pre-trained large language models (LLMs) for specialized tasks. Existing model merging methods suffer from a safety-utility tradeoff, in which enhanced general capability comes at the cost of weakened safety alignment. We trace this tradeoff to two root causes: neuron misidentification, caused by selection based solely on parameter magnitude, and cross-task neuron interference during merging. To address these issues, we propose LED-Merging, a three-step framework that locates task-specific neurons via gradient-based importance scores, dynamically elects important neurons through multi-model importance fusion, and decouples conflicting updates through parameter isolation. Extensive experiments on Llama-3-8B, Mistral-7B, and Llama2-13B demonstrate that LED-Merging effectively reduces harmful response rates (a 31.4% reduction on HarmBench for Llama-3-8B-Instruct) while preserving 95% of utility performance (52.39% accuracy on GSM8K). LED-Merging thus resolves the safety-utility tradeoff and provides a lightweight, learning-free paradigm for building robust multi-task LLMs. The code is available on GitHub.
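To make the three steps concrete, here is a minimal, hypothetical sketch of a LED-style merge over flattened parameter vectors. It uses task-vector magnitudes as a stand-in for the paper's gradient-based importance scores, and `led_merge`, its fusion rule, and its conflict-resolution rule are illustrative simplifications, not the paper's actual implementation:

```python
import numpy as np

def led_merge(base, task_models, top_k=0.5):
    """Hypothetical sketch of a locate/elect/isolate merge.

    1. Locate: score each task's parameter updates; here we use the
       magnitude of the task vector as a proxy for gradient-based scores.
    2. Elect: fuse importance across models and keep each task's
       top-k fraction of parameters.
    3. Isolate: when several tasks elect the same parameter, assign it
       to the highest-scoring task so conflicting updates are decoupled.
    """
    deltas = [m - base for m in task_models]       # task vectors
    scores = [np.abs(d) for d in deltas]           # locate (proxy scores)
    fused = sum(scores)                            # elect: multi-model fusion
    masks = []
    for s in scores:
        combined = s * fused                       # per-task score weighted by fused importance
        thresh = np.quantile(combined, 1 - top_k)
        masks.append(combined >= thresh)
    # isolate: each parameter belongs to at most one task
    owner = np.argmax(np.stack(scores), axis=0)
    merged = base.copy()
    for i, (d, m) in enumerate(zip(deltas, masks)):
        merged += d * (m & (owner == i))
    return merged
```

Because each parameter is updated by at most one task vector, overlapping elected neurons never accumulate conflicting updates, which is the core idea behind the parameter-isolation step.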