GIFT presents a gradient-aware immunization technique to defend diffusion models against malicious fine-tuning. Existing safety mechanisms such as traditional safety checkers are easily bypassed, and concept erasure methods fail under adversarial fine-tuning. GIFT addresses this problem by framing immunization as a bi-level optimization problem: the upper-level objective degrades the model's ability to represent harmful concepts through representation noising and maximization, while the lower-level objective preserves performance on safe data. Experimental results show that GIFT significantly impairs the model's ability to relearn harmful concepts under malicious fine-tuning while maintaining generation quality on safe content, suggesting a promising direction for building intrinsically safe generative models that are resilient to adversarial fine-tuning attacks.
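To make the bi-level framing concrete, the following is a minimal PyTorch-style sketch of a per-step training loss that flattens the two levels into a weighted sum for illustration. The function name `immunization_loss`, the `denoise` callable, and the weights `alpha`/`beta` are assumptions for this sketch, not the paper's implementation, and applying the noising term directly to the model's output is a simplification made only to keep the example short.

```python
import torch
import torch.nn.functional as F

def immunization_loss(denoise, noisy_latents, timesteps,
                      safe_cond, harmful_cond, true_noise,
                      alpha=1.0, beta=1.0):
    """Illustrative surrogate for GIFT's two objectives (sketch only).

    denoise: callable (latents, t, cond) -> predicted noise, e.g. a
             wrapped diffusion U-Net (name and signature are assumed).
    """
    # Lower-level term: ordinary denoising loss on safe prompts,
    # preserving performance on safe data.
    safe_pred = denoise(noisy_latents, timesteps, safe_cond)
    loss_safe = F.mse_loss(safe_pred, true_noise)

    # Upper-level terms, evaluated on harmful prompts.
    harmful_pred = denoise(noisy_latents, timesteps, harmful_cond)
    # (a) Representation noising: pull the harmful-prompt prediction
    #     toward random Gaussian noise.
    loss_noise = F.mse_loss(harmful_pred, torch.randn_like(harmful_pred))
    # (b) Maximization: increase the denoising error on harmful data,
    #     written here as a negated MSE term.
    loss_max = -F.mse_loss(harmful_pred, true_noise)

    # Weighted single-level relaxation of the bi-level problem.
    return loss_safe + alpha * loss_noise + beta * loss_max
```

In this simplified form, minimizing the combined loss keeps the standard diffusion objective low on safe data while pushing the model away from accurately denoising harmful data, which is the intuition behind the upper-/lower-level split described above.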