This paper presents the results of a comprehensive study evaluating the security vulnerabilities of autonomous agents based on large language models (LLMs). We demonstrate that autonomous agents that use LLMs as inference engines can be compromised through several attack vectors (direct prompt injection, RAG backdoors, and exploitation of inter-agent trust), allowing an attacker to achieve full system takeover. Experiments on 18 state-of-the-art LLMs, including GPT-4, Claude-4, and Gemini-2.5, show that the majority of these models are vulnerable to direct prompt injection and RAG backdoor attacks, as well as to attacks that exploit inter-agent trust relationships. These findings mark a paradigm shift in cybersecurity threats, suggesting that AI tools themselves can be leveraged as sophisticated attack vectors.