This paper systematically analyzes poisoning attacks on the textual inversion (TI) technique of diffusion models (DMs). First, we present Semantic Sensitivity Maps, a novel method for visualizing the impact of poisoning attacks on text embeddings. Next, we experimentally demonstrate that DMs exhibit nonuniform learning behavior across time steps, focusing in particular on low-noise samples. Poisoning attacks exploit this bias by injecting adversarial signals primarily at low time steps. Finally, we observe that adversarial signals disrupt learning from relevant concept regions during training, thereby compromising the TI process. Based on these insights, we propose Safe-Zone Training (SZT), a novel defense mechanism composed of three main components: 1. attenuation of high-frequency poisoning signals via JPEG compression; 2. restriction of TI training to high time steps, avoiding the adversarial signals concentrated at low time steps; and 3. loss masking to restrict learning to relevant regions. Through extensive experiments on various poisoning attacks, we show that SZT significantly improves the robustness of TI against all of these attacks and yields higher generation quality than previously published defenses.
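
The following is a minimal sketch of the three SZT components described above, intended only to make their roles concrete. The function names, the JPEG quality setting, and the timestep threshold are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch of the three SZT components (assumed names and parameters).
import io

import torch
import torch.nn.functional as F
from PIL import Image


def jpeg_attenuate(image: Image.Image, quality: int = 75) -> Image.Image:
    """Component 1: attenuate high-frequency poisoning signals via JPEG compression.
    The quality value is an assumed setting, not the paper's choice."""
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")


def sample_high_timesteps(batch_size: int, num_train_timesteps: int = 1000,
                          t_min: int = 500) -> torch.Tensor:
    """Component 2: restrict TI training to high (noisy) time steps, avoiding
    the low time steps where adversarial signals concentrate. The threshold
    t_min is a hypothetical value."""
    return torch.randint(t_min, num_train_timesteps, (batch_size,))


def masked_diffusion_loss(noise_pred: torch.Tensor, noise: torch.Tensor,
                          mask: torch.Tensor) -> torch.Tensor:
    """Component 3: loss masking, so the embedding is learned only from the
    relevant concept region (mask = 1 inside the region, 0 outside)."""
    per_pixel = F.mse_loss(noise_pred, noise, reduction="none")
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1.0)
```

In this reading, the three pieces slot into an otherwise standard TI training loop: images are JPEG-compressed before encoding, time steps are drawn only from the high-noise range, and the denoising loss is averaged only over the masked concept region.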