Despite recent improvements in the performance of text-to-image (T2I) models, their tendency to generate NSFW content, including sexually suggestive, violent, politically sensitive, and offensive images, remains a serious concern. To address this, we present PromptGuard, a novel content moderation technique. Inspired by the system prompt mechanism of large language models (LLMs), PromptGuard optimizes safe soft prompts (P*) that serve as implicit system prompts within the text embedding space of T2I models, enabling safe and realistic image generation without compromising inference efficiency or requiring proxy models. Furthermore, we optimize category-specific soft prompts and integrate them to provide safety guidance, enhancing reliability and usability. Extensive experiments on five datasets demonstrate that PromptGuard effectively mitigates NSFW content generation while maintaining high-quality benign outputs. It achieves a 3.8x speedup over existing methods and reduces the optimal unsafe rate to 5.84%, outperforming eight state-of-the-art defenses.
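To make the mechanism concrete, the sketch below illustrates the core idea in a schematic form: learnable soft-prompt vectors (P*) are prepended to a prompt's token embeddings and optimized against a frozen text encoder so that the conditioned embedding is pulled toward a safe target. The encoder, loss, dimensions, and data here are hypothetical placeholders for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical frozen text encoder: maps a sequence of token embeddings
# to a pooled prompt embedding (a stand-in for the T2I model's encoder).
class FrozenTextEncoder(torch.nn.Module):
    def __init__(self, dim: int = 768):
        super().__init__()
        self.proj = torch.nn.Linear(dim, dim)
        for p in self.parameters():
            p.requires_grad_(False)  # encoder stays frozen; only P* trains

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # (batch, seq, dim) -> (batch, dim), mean-pooled for simplicity
        return self.proj(token_embeds).mean(dim=1)

torch.manual_seed(0)
dim, n_soft, batch = 768, 8, 4
encoder = FrozenTextEncoder(dim)

# P*: learnable soft-prompt vectors living in the text embedding space.
soft_prompt = torch.nn.Parameter(torch.randn(n_soft, dim) * 0.02)
opt = torch.optim.AdamW([soft_prompt], lr=1e-3)

# Placeholder data: in practice these would be embeddings of unsafe
# prompts and pooled embeddings of safe "target" rewrites.
unsafe_embeds = torch.randn(batch, 16, dim)
safe_target = encoder(torch.randn(batch, 16, dim)).detach()

for step in range(200):
    # Prepend P* to each prompt's token embeddings (implicit system prompt).
    conditioned = torch.cat(
        [soft_prompt.expand(batch, -1, -1), unsafe_embeds], dim=1
    )
    pooled = encoder(conditioned)
    # Schematic objective: pull conditioned embeddings toward safe targets.
    loss = 1.0 - F.cosine_similarity(pooled, safe_target, dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At inference time, the optimized vectors would simply be concatenated with every user prompt's embeddings, which is why this style of defense adds no proxy model and essentially no latency; the category-specific variant described in the abstract would repeat this optimization per NSFW category and combine the resulting soft prompts.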