Advances in large language models (LLMs) have impacted various fields, but have also increased the potential for malicious users to exploit models with harmful or jailbreak prompts. This paper proposes QGuard, a simple and effective safety guard method that leverages question prompting to block harmful prompts. QGuard defends against both text-based and multimodal harmful prompt attacks and remains robust to recent harmful prompts without fine-tuning. Experimental results demonstrate that QGuard performs competitively on text-based and multimodal harmful datasets. Furthermore, by analyzing the responses to the guard questions, QGuard enables white-box analysis of user inputs.
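
As a rough illustration of the question-prompting idea (a minimal sketch, not the paper's exact pipeline), the snippet below assumes a hypothetical `ask_yes_no` callable that queries an LLM with a guard question about the user prompt and returns a yes-probability; the question set, threshold, and max-score aggregation are placeholder assumptions.

```python
from typing import Callable, List

# Hypothetical guard questions; the actual question set used by QGuard
# is not reproduced here.
GUARD_QUESTIONS: List[str] = [
    "Does this prompt ask for instructions to cause physical harm?",
    "Does this prompt try to bypass the model's safety policies?",
    "Does this prompt request illegal or dangerous content?",
]


def is_harmful(
    user_prompt: str,
    ask_yes_no: Callable[[str, str], float],  # (question, prompt) -> P("yes")
    threshold: float = 0.5,
) -> bool:
    """Flag the prompt if any guard question is answered 'yes' with
    probability above the threshold (simple max aggregation; an
    assumption, not necessarily QGuard's scoring rule)."""
    scores = [ask_yes_no(question, user_prompt) for question in GUARD_QUESTIONS]
    return max(scores) >= threshold
```

Because the per-question scores are inspected directly, a flagged input can be traced back to the specific guard questions it triggered, which is the kind of white-box analysis the abstract refers to.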