This paper addresses a serious privacy threat posed by the widespread deployment of Large Language Model (LLM)-based agents: malicious agents that interact with other agents to extract sensitive information. Dynamic conversations enable adaptive attack strategies that can lead to severe privacy violations, yet their evolving nature makes such sophisticated vulnerabilities difficult to anticipate and discover manually. To address this issue, this paper presents a search-based framework that simulates privacy-critical agent interactions to iteratively improve the guidance given to attackers and defenders. Each simulation involves three roles: a data subject, a data sender, and a data receiver. While the data subject's behavior is fixed, the attacker (data receiver) attempts to extract sensitive information from the defender (data sender) over multiple conversational turns. To explore this interaction space efficiently, our search algorithm employs LLMs as optimizers, using multi-threaded parallel search with inter-thread propagation to analyze simulation trajectories and iteratively propose new guidance. Through this process, we find that attack strategies escalate from simple direct requests to sophisticated multi-step tactics such as impersonation and consent forgery, while defenses evolve from rule-based constraints to identity-verification state machines. The discovered attacks and defenses transfer across a variety of scenarios and backbone models, demonstrating their practicality for building privacy-aware agents.
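The search procedure summarized above might be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function names, the length-based leakage proxy, and the stubbed LLM optimizer are all assumptions standing in for real simulation runs and LLM calls.

```python
import random

def simulate(attack_guidance: str) -> float:
    """Stub for one privacy-critical simulation: returns a leakage score.
    A real implementation would run an attacker/defender conversation
    and measure how much sensitive information was disclosed."""
    # Proxy assumption: more elaborate guidance tends to leak more.
    return min(len(attack_guidance) / 100.0, 1.0)

def llm_propose(guidance: str, score: float) -> str:
    """Stub for the LLM optimizer: analyzes the trajectory (here, just the
    score) and proposes revised guidance by adding a tactic."""
    tactics = ["impersonate an authority", "forge consent", "split the request"]
    return guidance + " | " + random.choice(tactics)

def search(num_threads: int = 4, iterations: int = 5, seed: int = 0) -> str:
    """Multi-threaded parallel search with inter-thread propagation:
    each thread keeps its own guidance; the weakest thread periodically
    adopts the best thread's guidance before the next proposal round."""
    random.seed(seed)
    threads = ["directly request the data"] * num_threads
    for _ in range(iterations):
        scores = [simulate(g) for g in threads]
        best = max(range(num_threads), key=scores.__getitem__)
        worst = min(range(num_threads), key=scores.__getitem__)
        # Inter-thread propagation of the strongest guidance so far.
        threads[worst] = threads[best]
        # Each thread's optimizer proposes new guidance from its trajectory.
        threads = [llm_propose(g, s) for g, s in zip(threads, scores)]
    return max(threads, key=simulate)

best = search()
```

In this toy loop the guidance grows from a simple direct request into a chain of tactics, mirroring (in caricature) the escalation from direct requests to multi-step strategies reported in the paper.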