This paper proposes IDEATOR, a novel jailbreak attack method that probes the vulnerabilities of large Vision-Language Models (VLMs) by inducing harmful outputs, with the aim of supporting their secure deployment. IDEATOR leverages a VLM itself as a powerful adversarial agent to generate targeted jailbreak texts, pairing them with jailbreak images produced by a state-of-the-art diffusion model. Experimental results show that IDEATOR achieves a 94% attack success rate (ASR) against MiniGPT-4 and also attains high ASRs against LLaVA, InstructBLIP, and Chameleon. Building on IDEATOR's strong transferability and automated pipeline, we further introduce VLJailbreakBench, a safety benchmark consisting of 3,654 multimodal jailbreak samples. Benchmark results on 11 recently released VLMs reveal significant differences in their safety alignment.
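
The sketch below illustrates, under stated assumptions, the kind of attacker loop the abstract describes: an attacker VLM drafts a jailbreak instruction and an image prompt, a diffusion model renders the jailbreak image, and the text-image pair is sent to the victim VLM. Only the diffusion call uses a real library (diffusers); `attacker_vlm`, `victim_vlm`, and `judge_is_jailbroken` are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of an IDEATOR-style attack loop (assumptions noted above).
import torch
from diffusers import StableDiffusionPipeline

def attacker_vlm(goal: str, feedback: str | None) -> tuple[str, str]:
    """Placeholder: attacker VLM returns (jailbreak_text, image_prompt)."""
    raise NotImplementedError

def victim_vlm(text: str, image) -> str:
    """Placeholder: query the target VLM with a text-image pair."""
    raise NotImplementedError

def judge_is_jailbroken(response: str) -> bool:
    """Placeholder: decide whether the response is a successful jailbreak."""
    raise NotImplementedError

def ideator_style_attack(goal: str, max_iters: int = 5):
    # Diffusion model used to render the jailbreak image for each attempt.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    feedback = None
    for _ in range(max_iters):
        jailbreak_text, image_prompt = attacker_vlm(goal, feedback)
        image = pipe(image_prompt).images[0]          # jailbreak image
        response = victim_vlm(jailbreak_text, image)  # query the victim VLM
        if judge_is_jailbroken(response):
            return jailbreak_text, image, response
        feedback = response  # let the attacker refine its next attempt
    return None
```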