This paper presents BadPromptFL, the first backdoor attack targeting prompt-based federated learning (PromptFL) in multimodal contrastive learning models. In BadPromptFL, compromised clients jointly optimize local backdoor triggers and prompt embeddings, injecting poisoned prompts into the global aggregation process. These poisoned prompts are then propagated to benign clients, enabling universal backdoor activation at inference time without any modification of model parameters. By exploiting the contextual learning behavior of CLIP-style architectures, BadPromptFL achieves high attack success rates (e.g., above 90%) while keeping the trigger visually inconspicuous and requiring the participation of only a small fraction of clients. Extensive experiments across datasets and aggregation protocols demonstrate the effectiveness, stealth, and generalizability of the attack, raising serious concerns about the robustness of prompt-based federated learning in real-world deployments.
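
The attack's core mechanism, joint optimization of a backdoor trigger and the shared prompt (context) embeddings against a frozen CLIP-style backbone, can be sketched as follows. This is a minimal PyTorch illustration under stated assumptions, not the paper's implementation: the stub encoders, tensor shapes, corner trigger placement, target class, and loss weight are all hypothetical.

```python
# Minimal sketch (assumptions, not the authors' code) of one local step on a
# compromised BadPromptFL client: the trigger patch and the shared prompt
# context are optimized jointly while the CLIP-style backbone stays frozen.
import torch
import torch.nn.functional as F

D_EMB, N_CTX, N_CLASSES = 512, 16, 10        # assumed sizes

# Stand-ins for the frozen CLIP image/text towers (real CLIP encoders would
# be loaded here; linear stubs keep the sketch self-contained).
image_encoder = torch.nn.Linear(3 * 32 * 32, D_EMB)
text_encoder = torch.nn.Linear(N_CTX * D_EMB, D_EMB)
for p in list(image_encoder.parameters()) + list(text_encoder.parameters()):
    p.requires_grad_(False)

ctx = torch.nn.Parameter(0.02 * torch.randn(N_CTX, D_EMB))  # shared learnable prompt context
cls_emb = torch.randn(N_CLASSES, D_EMB)                     # frozen class-name embeddings (placeholder)
trigger = torch.nn.Parameter(torch.zeros(3, 4, 4))          # small learnable trigger patch
opt = torch.optim.SGD([ctx, trigger], lr=0.01)

def text_features():
    # CoOp-style prompt per class: shared context tokens followed by the
    # class embedding, pushed through the (stub) text tower.
    prompts = torch.cat(
        [ctx[:-1].unsqueeze(0).expand(N_CLASSES, -1, -1),
         cls_emb.unsqueeze(1)], dim=1)                      # (C, N_CTX, D)
    return F.normalize(text_encoder(prompts.flatten(1)), dim=-1)

def apply_trigger(x):
    # Additive patch in the bottom-right corner (placement is an assumption).
    x = x.clone()
    x[:, :, -4:, -4:] = x[:, :, -4:, -4:] + trigger
    return x.clamp(0.0, 1.0)

def clip_logits(x):
    img = F.normalize(image_encoder(x.flatten(1)), dim=-1)
    return 100.0 * img @ text_features().t()                # cosine-similarity logits

x, y = torch.rand(8, 3, 32, 32), torch.randint(0, N_CLASSES, (8,))
target = torch.full_like(y, 0)                              # attacker-chosen target class (assumed)

# Clean loss preserves utility; backdoor loss binds triggered images to the
# target class. The 1.0 weight is illustrative.
loss = (F.cross_entropy(clip_logits(x), y)
        + 1.0 * F.cross_entropy(clip_logits(apply_trigger(x)), target))
opt.zero_grad()
loss.backward()
opt.step()

# Only `ctx` is uploaded for aggregation (e.g., FedAvg); averaging mixes the
# poisoned context into the global prompt shared with benign clients.
```

Because only the context vectors are uploaded, a standard FedAvg-style round averages the poisoned context into the global prompt, which is how the backdoor reaches benign clients without any change to the frozen model parameters.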