This paper argues that the rapid adoption of large language model (LLM) agents and multi-agent systems has enabled unprecedented capabilities in natural language processing and generation, but has also introduced security vulnerabilities that extend beyond traditional prompt injection attacks. We present the first comprehensive evaluation of LLM agents as an attack vector capable of exploiting trust boundaries within agentic AI systems to achieve complete computer takeover. We demonstrate that popular LLMs, including GPT-4o, Claude-4, and Gemini-2.5, can be tricked into autonomously installing and executing malware on victim systems through three attack surfaces: direct prompt injection, RAG backdoor attacks, and inter-agent trust exploitation. Our evaluation of 17 state-of-the-art LLMs reveals a striking vulnerability hierarchy: 41.2% of models are vulnerable to direct prompt injection, 52.9% to RAG backdoor attacks, and 82.4% to inter-agent trust exploitation. Notably, we found that even LLMs that successfully blocked direct malicious commands would execute the same payload when it was requested by a peer agent, revealing a fundamental flaw in current multi-agent security models. Only 5.9% (1/17) of the tested models resisted all attack vectors, with most exhibiting context-dependent security behaviors that create exploitable blind spots. These results highlight the need for greater awareness of and research into the security risks of LLMs, and illustrate a paradigm shift in cybersecurity threats, in which AI tools themselves become sophisticated attack vectors.