This paper highlights that multimodal large language models (MLLMs), which excel at vision-language understanding, struggle to adapt to dynamic real-world environments that require the continuous integration of new knowledge and skills. To address this challenge, we present MLLM-CL, a new benchmark covering both domain and skill continual learning. Domain continual learning evaluates models on independent and identically distributed (IID) data from evolving mainstream domains, whereas skill continual learning targets non-IID scenarios that probe newly acquired model abilities. Furthermore, we propose a method that prevents catastrophic interference through parameter isolation and an MLLM-based routing mechanism. Experimental results demonstrate that the proposed method significantly outperforms existing approaches, integrating domain-specific knowledge and functional skills with minimal forgetting.
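To make the parameter-isolation-plus-routing idea concrete, the sketch below trains one small low-rank adapter per domain on top of a frozen shared backbone and dispatches each input to a single adapter at inference time. This is a minimal illustration under assumed details, not the paper's implementation: the names (`DomainAdapter`, `RoutedBackbone`, `route`), the adapter rank, and the keyword-based stand-in router are all hypothetical, and the actual method uses an MLLM as the router.

```python
# Minimal sketch: parameter isolation via per-domain adapters plus a router.
# Hypothetical illustration; the paper routes with an MLLM, not keywords.
import torch
import torch.nn as nn


class DomainAdapter(nn.Module):
    """Small low-rank adapter holding the isolated parameters for one domain."""

    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)  # adapter starts as an identity residual

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(self.down(h))


class RoutedBackbone(nn.Module):
    """Frozen shared backbone plus one isolated adapter per domain/skill.

    Training a new domain updates only that domain's adapter, so previously
    learned domains cannot be overwritten (the parameter-isolation idea).
    """

    def __init__(self, dim: int = 64):
        super().__init__()
        self.backbone = nn.Linear(dim, dim)
        self.backbone.requires_grad_(False)  # shared weights stay frozen
        self.adapters = nn.ModuleDict()

    def add_domain(self, name: str, dim: int = 64) -> None:
        self.adapters[name] = DomainAdapter(dim)

    def forward(self, x: torch.Tensor, domain: str) -> torch.Tensor:
        h = self.backbone(x)
        return self.adapters[domain](h)


def route(query: str) -> str:
    """Stand-in router: picks a domain from the query text.

    The paper instead asks an MLLM to decide which domain or skill an
    input belongs to; keyword matching here only keeps the sketch runnable.
    """
    keywords = {"x-ray": "medical", "invoice": "finance", "car": "driving"}
    for kw, domain in keywords.items():
        if kw in query.lower():
            return domain
    return "medical"  # arbitrary fallback for this sketch


model = RoutedBackbone()
for d in ["medical", "finance", "driving"]:
    model.add_domain(d)

x = torch.randn(1, 64)
domain = route("Describe this chest X-ray image.")
print(domain, model(x, domain).shape)  # -> medical torch.Size([1, 64])
```

Because each adapter is trained in isolation while the backbone stays frozen, adding a new domain cannot degrade earlier ones; forgetting can then only enter through routing mistakes, which is why the quality of the router matters.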