This paper addresses the security challenges of rapidly evolving large language models (LLMs) deployed as autonomous agents that operate across organizational boundaries. LLM agents collaborate on tasks requiring distributed expertise, such as disaster response and supply chain optimization, but this cross-domain collaboration breaks the unified trust assumptions underlying traditional alignment and containment techniques. Agents that are secure in isolation can still leak secrets or violate policies when they receive messages from untrusted peers, a risk that arises from emergent multi-agent dynamics rather than from classical software bugs. We present a security agenda for cross-domain multi-agent LLM systems, comprising seven new security challenges, candidate attacks for each challenge, security evaluation metrics, and future research directions.