This paper introduces Advertisement Embedding Attacks (AEA), a novel security threat to large language models (LLMs). AEA covertly injects promotional or malicious content into the outputs of LLMs and AI agents through two low-cost vectors: (1) hijacking third-party service-distribution platforms to prepend adversarial prompts, and (2) publishing back-doored open-source checkpoints fine-tuned on attacker data. Unlike traditional attacks that degrade accuracy, AEA compromises information integrity: the model appears to behave normally while covertly returning advertisements, propaganda, or hate speech. We detail the attack pipeline, map five stakeholder victim groups, and present an initial prompt-based self-inspection defense that mitigates these injections without additional model retraining. Our findings reveal an urgent, under-addressed gap in LLM security and call for coordinated detection, auditing, and policy responses from the AI-safety community.
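To make the prompt-based self-inspection defense concrete, the sketch below shows one minimal way such a check could be wired around a model without retraining. This is an illustrative assumption, not the paper's exact prompt or pipeline; `llm` is a hypothetical text-in/text-out callable standing in for whatever inference API is in use.

```python
from typing import Callable

# Hypothetical audit prompt: asks the model to inspect its own reply for
# injected promotional or malicious content before it is returned to the user.
SELF_CHECK_PROMPT = (
    "You are auditing an assistant's reply for covertly injected content.\n"
    "User request:\n{request}\n\n"
    "Assistant reply:\n{reply}\n\n"
    "Does the reply contain advertisements, promotional links, propaganda, or "
    "hate speech that the user did not ask for? Answer strictly YES or NO."
)


def self_checked_reply(llm: Callable[[str], str], request: str) -> str:
    """Generate a reply, then have the same model audit it before returning."""
    reply = llm(request)
    verdict = llm(SELF_CHECK_PROMPT.format(request=request, reply=reply))
    if verdict.strip().upper().startswith("YES"):
        # Withhold (or regenerate / strip) instead of surfacing the tainted reply.
        return "[Response withheld: possible injected promotional or malicious content.]"
    return reply
```

Because the check is purely prompt-level, it can wrap any deployed model or agent, though a compromised checkpoint could in principle also subvert its own audit; the paper positions this as an early mitigation rather than a complete defense.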