This paper examines how recent advances in image generation with large-scale text-conditional diffusion models open unexplored opportunities for adversaries to bias generated images for malicious purposes (e.g., public opinion manipulation and propaganda). We explore an attack vector that allows an adversary to inject arbitrary biases into a target model through a low-cost backdooring technique that uses specific natural language triggers embedded in a small set of malicious samples produced with publicly available generative models. The adversary can select common word sequences as triggers, which users may then activate unintentionally during inference. Extensive experiments involving over 200,000 generated images and hundreds of fine-tuned models demonstrate the feasibility of the proposed backdoor attack, show that the injected biases preserve strong text-image alignment, and highlight the difficulty of detecting biased images without prior knowledge of the bias. A cost analysis confirms the low financial barrier to executing such attacks ($10-$15), underscoring the need for robust defense strategies against these vulnerabilities in diffusion models.