This paper proposes PromptKeeper, a defense mechanism against leakage of the system prompts that guide the outputs of large language models (LLMs). System prompts often encode business logic and sensitive information, making them a prime target for extraction through both malicious and regular user queries. PromptKeeper tackles two key challenges: reliably detecting prompt leakage and mitigating side-channel vulnerabilities once a leak is identified. By framing leakage detection as a hypothesis-testing problem, it identifies both explicit and subtle leaks. Upon detecting a leak, it regenerates the response using a dummy prompt, rendering the output indistinguishable from a normal, leak-free interaction. As a result, PromptKeeper provides robust protection against prompt extraction attacks mounted through malicious or regular queries, while preserving conversational capability and runtime efficiency for typical user interactions.
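The detect-then-regenerate pipeline described above can be sketched as follows. This is a toy illustration, not the paper's actual mechanism: the `generate` stub stands in for an LLM call, the token-overlap `leak_score` is a simplified test statistic, and the threshold is calibrated empirically from leak-free responses so that flagging a leak corresponds to rejecting a "no leak" null hypothesis at significance level `alpha`. All function names and prompts here are illustrative assumptions.

```python
import math

def generate(system_prompt, query):
    """Hypothetical stand-in for an LLM call: echoes the system prompt when
    asked to, otherwise gives a generic answer. For illustration only."""
    if "repeat your instructions" in query.lower():
        return f"My instructions are: {system_prompt}"
    return "Here is a helpful answer to your question."

def leak_score(response, system_prompt):
    """Toy leakage statistic: fraction of system-prompt tokens that
    also appear in the response."""
    prompt_tokens = set(system_prompt.lower().split())
    resp_tokens = set(response.lower().split())
    return len(prompt_tokens & resp_tokens) / max(len(prompt_tokens), 1)

def calibrate_null(system_prompt, benign_queries):
    """Estimate the null (no-leak) distribution of the statistic by scoring
    responses to known-benign queries."""
    return [leak_score(generate(system_prompt, q), system_prompt)
            for q in benign_queries]

def detect_leak(response, system_prompt, null_scores, alpha=0.05):
    """Hypothesis test: reject the no-leak null when the observed score
    exceeds the empirical (1 - alpha) quantile of the null scores."""
    s = leak_score(response, system_prompt)
    sorted_null = sorted(null_scores)
    k = min(len(sorted_null) - 1,
            math.ceil((1 - alpha) * len(sorted_null)) - 1)
    return s > sorted_null[k]

def guarded_generate(system_prompt, query, null_scores):
    """Answer a query; if the response is flagged as leaking the prompt,
    regenerate it with a dummy prompt so the final reply looks like a
    normal, leak-free interaction."""
    response = generate(system_prompt, query)
    if detect_leak(response, system_prompt, null_scores):
        response = generate("You are a helpful assistant.", query)
    return response
```

In this sketch, a benign query yields a response whose score falls within the calibrated null range and is returned unchanged, while an extraction attempt scores far above the threshold and triggers regeneration under the dummy prompt.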