This paper addresses the safety of large language models (LLMs) deployed as agents. LLMs fine-tuned to act as agents can become more willing to carry out harmful actions and less likely to refuse them. To mitigate this, the paper proposes Prefix Injection Guard (PING), which prepends automatically generated natural-language prefixes to agent responses to encourage refusal of harmful requests. PING uses an iterative procedure that optimizes for both task performance and refusal behavior, and it substantially improves safety over existing prompting-based defenses on web navigation and code generation tasks. Analysis of internal hidden states confirms that the injected prefix tokens play a key role in steering the agent's behavior. This paper contains content that may be considered unethical or offensive.
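To make the core idea concrete, the following is a minimal sketch of response-prefix injection under our own assumptions; it is not the authors' implementation, the function and variable names (GUARD_PREFIX, build_prefilled_messages) are hypothetical, and the iterative prefix-generation and selection loop described in the abstract is omitted.

```python
# Minimal sketch of prefix injection into an agent's response (illustrative only).
# Assumption: the backend supports "prefilling" the assistant turn, so generation
# continues from the injected prefix rather than starting the response from scratch.

GUARD_PREFIX = (
    "Before acting, I must check whether this request could cause harm. "
    "If it could, I will refuse and explain why. "
)

def build_prefilled_messages(history, user_request, prefix=GUARD_PREFIX):
    """Return a chat transcript whose final assistant turn is pre-filled with a
    refusal-encouraging prefix; the model then continues after that prefix."""
    return list(history) + [
        {"role": "user", "content": user_request},
        {"role": "assistant", "content": prefix},  # injected prefix tokens
    ]

# Hypothetical usage: the resulting messages would be passed to a chat-completion
# API that supports assistant-turn prefilling.
msgs = build_prefilled_messages([], "Delete every file in the shared drive.")
```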