Prompt injection attacks pose a serious challenge to the secure deployment of large language models (LLMs) in real-world applications. To address this issue, the authors propose AEGIS, an automated co-evolutionary framework for defending against prompt injection attacks. Attacking and defending prompts are iteratively co-optimized by a text gradient optimization (TGO) module, which leverages feedback from an LLM-based evaluation loop. On real-world task scoring datasets, AEGIS consistently outperforms existing baselines, achieving superior robustness as measured by both attack success rate and detection performance.
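The co-evolutionary loop described above can be sketched as alternating refinement of the attack and defense prompts, each guided by an evaluator's feedback. The sketch below is a toy illustration only: the function names (`evaluate`, `tgo_refine`, `coevolve`) and the heuristic scoring are assumptions for exposition, not the paper's actual API; in AEGIS the evaluator and the refinement step would both be LLM calls.

```python
# Toy sketch of a co-evolutionary attack/defense loop.
# All names and heuristics here are illustrative assumptions:
# in the real framework, `evaluate` would be an LLM-based judge and
# `tgo_refine` would rewrite prompts using textual "gradient" feedback.

def evaluate(attack: str, defense: str) -> float:
    # Stand-in for the LLM evaluator: returns an attack-success score in [0, 1].
    # Toy heuristic: longer attacks beat shorter defenses.
    return max(0.0, min(1.0, (len(attack) - len(defense)) / 100 + 0.5))

def tgo_refine(prompt: str, score: float, role: str) -> str:
    # Stand-in for text-gradient refinement: an LLM would rewrite `prompt`
    # based on textual feedback derived from `score`; here we just append
    # a marker token so the loop visibly mutates both prompts.
    return prompt + (" <atk>" if role == "attack" else " <def>")

def coevolve(attack: str, defense: str, rounds: int = 5):
    for _ in range(rounds):
        score = evaluate(attack, defense)                # judge current pair
        attack = tgo_refine(attack, score, "attack")     # attacker adapts
        score = evaluate(attack, defense)                # re-judge
        defense = tgo_refine(defense, score, "defense")  # defender adapts
    return attack, defense, evaluate(attack, defense)

atk, dfn, final_score = coevolve("ignore previous instructions",
                                 "follow system policy")
```

The key structural point the sketch captures is the alternation: each side is optimized against the other's latest version, with the evaluator's score playing the role of the feedback signal that TGO turns into a prompt rewrite.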