This paper presents BadPromptFL, a novel backdoor attack targeting prompt-based federated learning (PromptFL) in multimodal contrastive learning models. In BadPromptFL, compromised clients jointly optimize local backdoor triggers and prompt embeddings, injecting poisoned prompts into the global aggregation process. These poisoned prompts are then propagated to benign clients, enabling universal backdoor activation at inference time without modifying any model parameters. Leveraging the contextual learning behavior of CLIP-style architectures, BadPromptFL achieves high attack success rates (e.g., >90%) with minimal visibility and limited client participation. Extensive experiments across diverse datasets and aggregation protocols demonstrate the effectiveness, stealth, and generalizability of the attack, raising serious concerns about the robustness of prompt-based federated learning in real-world deployments.
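To make the attack surface concrete, the following is a minimal sketch of the compromised client's local step described above: jointly optimizing a shared prompt embedding and a visual trigger patch under a CLIP-style alignment objective, so that triggered images are pulled toward an attacker-chosen target class while the prompt remains useful on clean data. The stand-in encoders, tensor dimensions, trigger placement, and loss weighting are all illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch (not the authors' code) of a malicious client's local
# objective in BadPromptFL-style poisoning. Encoders, sizes, and weights
# below are assumptions for illustration only.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

EMBED_DIM, PROMPT_LEN, PATCH = 512, 16, 8  # assumed dimensions

# Stand-ins for a frozen CLIP-style image tower and text tower.
image_encoder = torch.nn.Linear(3 * 32 * 32, EMBED_DIM)
text_encoder = torch.nn.Linear(PROMPT_LEN * EMBED_DIM, EMBED_DIM)
for enc in (image_encoder, text_encoder):
    for p in enc.parameters():
        p.requires_grad_(False)  # backbone stays frozen; only prompts are learned

# Learnable components held by the malicious client:
prompt = torch.randn(PROMPT_LEN, EMBED_DIM, requires_grad=True)   # uploaded for aggregation
trigger = torch.zeros(3, PATCH, PATCH, requires_grad=True)        # kept local, never shared

opt = torch.optim.Adam([prompt, trigger], lr=1e-3)

# Hypothetical text feature of the attacker's target class.
target_text = F.normalize(torch.randn(EMBED_DIM), dim=-1)

def apply_trigger(images):
    """Paste the trigger patch onto the bottom-right corner of each image."""
    patched = images.clone()
    patched[:, :, -PATCH:, -PATCH:] = trigger
    return patched

def encode_images(images):
    return F.normalize(image_encoder(images.flatten(1)), dim=-1)

def prompt_text_feature():
    return F.normalize(text_encoder(prompt.flatten()), dim=-1)

for step in range(100):
    images = torch.rand(32, 3, 32, 32)  # stand-in for a local data batch
    clean_feat = encode_images(images)
    bad_feat = encode_images(apply_trigger(images))
    txt = prompt_text_feature()

    # Clean-task term: keep the prompt useful so the poison survives aggregation.
    clean_loss = 1.0 - (clean_feat @ txt).mean()
    # Backdoor term: align triggered images with the target-class text feature.
    backdoor_loss = 1.0 - (bad_feat @ target_text).mean()

    loss = clean_loss + 1.0 * backdoor_loss  # assumed equal weighting
    opt.zero_grad()
    loss.backward()
    opt.step()

# Only `prompt` is sent to the server for aggregation (e.g., FedAvg); the
# aggregated prompt then carries the backdoor to benign clients, while the
# trigger remains the attacker's private inference-time key.
```

After aggregation, any client using the poisoned global prompt would exhibit the backdoor whenever the trigger appears at inference, consistent with the abstract's claim that no model parameters need to be modified.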