This paper presents a comprehensive study of emerging security threats faced by large-scale language models (LLMs) deployed in enterprise environments (e.g., Microsoft 365 Copilot), specifically multi-round prompt inference attacks. We simulate realistic attack scenarios in which an attacker exploits LLMs integrated into enterprise sensitive data (e.g., SharePoint documents or emails) using questions that do not reveal malicious intent and indirect prompt injection. We develop and analyze a formal threat model for multi-round inference attacks using probability theory, an optimization framework, and information-theoretic leakage bounds. We show that the attacks reliably leak sensitive information in the context of LLMs even when standard safeguards are implemented. In this paper, we propose and evaluate defense techniques including statistical anomaly detection, fine-grained access control, prompt hygiene techniques, and architectural modifications to the LLM deployment. Each defense is supported by mathematical analysis or experimental simulations. For example, we derive bounds on information leakage in differential privacy-based training and show an anomaly detection method that flags multi-round attacks with high AUC. We also introduce an approach called “spotlighting” that uses input transformations to isolate untrusted prompt content and reduce attack success rates by a factor of 10. Finally, we provide a formal proof of concept and empirical validation of a combined defense-in-depth strategy. This study highlights that securing LLM in enterprise environments requires moving beyond single-shot prompt filtering to a holistic, multi-stage view of both attack and defense.