This paper proposes MoE-CL, a parameter-efficient adversarial mixture-of-experts (MoE) framework for continual learning (CL) of large language models (LLMs), designed for the diverse and constantly evolving tasks found in industrial settings. To mitigate catastrophic forgetting, a critical weakness of existing CL approaches, MoE-CL adopts a dual-expert design that pairs task-specific experts with shared experts. Task-specific experts preserve knowledge for each individual task, while shared experts facilitate transfer across tasks. A generative adversarial network (GAN)-based task-aware discriminator is integrated to prevent the shared experts from passing task-irrelevant noise between tasks. Through adversarial learning, the shared experts learn generalized representations while the task-specific experts retain task-specific details, balancing knowledge retention against cross-task generalization. We validate the effectiveness and practicality of MoE-CL through experiments on the public MTL5 benchmark, the Tencent3 industrial benchmark, and A/B testing on the content compliance review system of the Tencent Video platform.
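To make the dual-expert design concrete, the following minimal sketch shows one plausible way to wire a shared expert, per-task experts, and a task-aware discriminator in PyTorch. It is an illustration under assumptions, not the paper's implementation: the module names (`MoECLLayer`, `GradientReversal`), the feed-forward expert architecture, the additive combination of expert outputs, and the gradient-reversal trick used to realize the adversarial objective are all choices made here for clarity rather than details stated in the abstract.

```python
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass.
    A common way to train a representation adversarially against a
    discriminator; the paper's exact adversarial formulation may differ."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class MoECLLayer(nn.Module):
    """Illustrative dual-expert layer: one shared expert plus one expert per
    task, with a task-aware discriminator applied to the shared path."""

    def __init__(self, d_model: int, num_tasks: int, lambd: float = 1.0):
        super().__init__()
        ffn = lambda: nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.shared_expert = ffn()
        self.task_experts = nn.ModuleList(ffn() for _ in range(num_tasks))
        # The discriminator tries to predict the task id from the shared
        # representation; gradient reversal pushes that representation to be
        # task-invariant, suppressing task-irrelevant noise in the shared path.
        self.discriminator = nn.Linear(d_model, num_tasks)
        self.lambd = lambd

    def forward(self, x: torch.Tensor, task_id: int):
        shared = self.shared_expert(x)            # generalized, cross-task features
        specific = self.task_experts[task_id](x)  # task-specific features
        task_logits = self.discriminator(GradientReversal.apply(shared, self.lambd))
        # task_logits feed a cross-entropy loss against task_id (the adversarial term);
        # the combined output continues through the backbone as usual.
        return shared + specific, task_logits
```

In this sketch, only the expert for the current `task_id` and the shared expert are active per forward pass, so adding a new task grows the parameter count by one expert while the backbone and shared expert stay fixed, which is the parameter-efficiency argument the abstract alludes to.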