This paper investigates the security vulnerabilities of AI agents integrated into Web3 under the context of adversarial threats in real-world scenarios. In particular, we introduce “context manipulation”, a comprehensive attack vector that exploits unprotected context surfaces such as input channels, memory modules, and external data feeds. We demonstrate memory injection, a more stealthy and persistent threat than traditional prompt injection, and empirically demonstrate that malicious injection can cause asset transfers and protocol violations using a decentralized AI agent framework called ElizaOS. By evaluating more than 150 blockchain tasks and 500 attack test cases using CrAIBench, a Web3-centric benchmark, we confirm that AI models are more vulnerable to memory injection, and show that prompt injection defenses and detectors as defensive strategies provide only limited protection, while fine-tuning-based defenses significantly reduce the attack success rate.