This paper proposes ChordPrompt, a framework that enhances the adaptability of pre-trained vision-language models in continual learning (CL) settings. To overcome the limitations of existing prompt learning methods, which focus on class-incremental learning and rely on single-modality prompts, ChordPrompt introduces cross-modal prompts that exploit the interplay between visual and textual information, together with domain-adaptive text prompts for continual adaptation across multiple domains. Experimental results on multi-domain incremental learning benchmarks show that ChordPrompt outperforms state-of-the-art methods in both zero-shot generalization and downstream task performance.
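To make the cross-modal prompt idea concrete, the sketch below shows one possible way such prompts could interact: each modality's learnable prompt tokens are conditioned on a projection of the other modality's tokens before being prepended to a frozen CLIP-style encoder. This is a minimal illustration under assumed shapes and names (`CrossModalPrompts`, the linear projections, and all dimensions are assumptions), not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class CrossModalPrompts(nn.Module):
    """Illustrative cross-modal prompts: each modality's prompts are
    conditioned on a projection of the other modality's prompts.
    All names, shapes, and the use of linear projections are assumptions."""

    def __init__(self, prompt_len: int = 4, text_dim: int = 512, vision_dim: int = 768):
        super().__init__()
        # Learnable prompt tokens for each modality.
        self.text_prompts = nn.Parameter(torch.randn(prompt_len, text_dim) * 0.02)
        self.vision_prompts = nn.Parameter(torch.randn(prompt_len, vision_dim) * 0.02)
        # Projections that let each modality's prompts inform the other's.
        self.text_to_vision = nn.Linear(text_dim, vision_dim)
        self.vision_to_text = nn.Linear(vision_dim, text_dim)

    def forward(self) -> tuple[torch.Tensor, torch.Tensor]:
        # Effective prompts mix a modality's own tokens with a projection
        # of the other modality's tokens (the "cross-modal" interaction).
        vision = self.vision_prompts + self.text_to_vision(self.text_prompts)
        text = self.text_prompts + self.vision_to_text(self.vision_prompts)
        return text, vision

# The returned tokens would be prepended to the input sequences of a frozen
# text and image encoder, respectively; only the prompts and projections train.
prompts = CrossModalPrompts()
text_p, vision_p = prompts()
print(text_p.shape, vision_p.shape)  # torch.Size([4, 512]) torch.Size([4, 768])
```

In a continual setting, one domain-adaptive variant of the text prompts could be kept per domain while the projections stay shared, which is one plausible reading of the framework's design; the paper should be consulted for the actual mechanism.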