This paper addresses the vulnerability of text-to-image generation models to so-called 'jailbreak' attacks, which bypass their safety mechanisms to produce unsafe content. We point out that existing jailbreak attack methods suffer from limitations such as impractical access requirements, easily detectable unnatural prompts, a limited search space, and a high number of queries to the target system. To overcome these limitations, we propose JailFuzzer, a novel fuzzing framework built on large language model (LLM) agents. JailFuzzer consists of three components: a seed pool of initial and jailbreak prompts, a guided mutation engine that generates semantically meaningful mutations, and an oracle function that evaluates whether a jailbreak has succeeded. By implementing these components with LLM-based agents, JailFuzzer achieves both efficiency and adaptability. Experimental results show that JailFuzzer generates more natural and semantically consistent prompts than existing methods, reduces detectability, and achieves a high attack success rate with minimal query overhead. These findings highlight the need for robust safety mechanisms in generative models and provide a foundation for further research on defenses against sophisticated jailbreak attacks. JailFuzzer is open source.
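To make the three-component workflow concrete, the following is a minimal Python sketch of a seed-pool / guided-mutation / oracle fuzzing loop of the kind described above. It is an illustration under our own assumptions, not the authors' implementation: the names `fuzz`, `mutate_with_llm`, `oracle`, and `max_queries` are hypothetical placeholders.

```python
import random

def fuzz(initial_prompts, mutate_with_llm, oracle, max_queries=100):
    """Guided fuzzing loop sketch: draw a seed prompt, ask an LLM agent for a
    natural-sounding mutation, and let an oracle judge whether the resulting
    generation constitutes a successful jailbreak."""
    seed_pool = list(initial_prompts)      # seed pool of initial/jailbreak prompts
    successes = []
    for _ in range(max_queries):           # bound queries to the target system
        seed = random.choice(seed_pool)
        candidate = mutate_with_llm(seed)  # guided, semantically meaningful mutation
        if oracle(candidate):              # oracle: did the jailbreak succeed?
            successes.append(candidate)
        else:
            seed_pool.append(candidate)    # retain the mutant for later rounds
    return successes
```

In this sketch, keeping unsuccessful mutants in the seed pool lets later rounds build on partially effective prompts, which is one plausible way a guided mutation engine could reduce the number of queries needed.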