This paper presents a method for combining existing pre-trained expert LLMs (Large Language Models) to handle large-scale and diverse tasks efficiently. To overcome the limitations of existing task-level expert selection methods, we propose Symbolic-MoE, a framework that enables adaptive, instance-level mixing of experts. Symbolic-MoE dynamically selects relevant expert LLMs at a fine-grained skill level, such as algebra within mathematics or molecular biology within biomedical reasoning. Each recruited expert generates its own reasoning, and an aggregator, chosen for its ability to integrate diverse reasoning outputs, synthesizes the results into a single high-quality response. To address the high computational overhead of repeatedly loading and unloading models, we implement a batch inference strategy that groups instances by their assigned experts, so each model needs to be loaded only once. On diverse benchmarks (MMLU-Pro, GPQA, AIME, and MedMCQA), our approach outperforms GPT-4o-mini as well as multi-agent approaches, achieving an average absolute improvement of 8.15% over the best multi-agent baseline. It also generalizes well to unseen tasks and, by removing the need for costly multi-round discussions, outperforms discussion-based baselines at lower cost.
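To make the described pipeline concrete, the following is a minimal Python sketch of the three stages summarized above: skill-based, instance-level expert recruitment; batching of instances by their assigned experts; and aggregation of the experts' outputs. All names here (`SKILL_PROFILES`, `infer_skills`, `call_llm`, `aggregate`, and the example model identifiers) are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of the Symbolic-MoE pipeline described above.
# SKILL_PROFILES, infer_skills, call_llm, and aggregate are hypothetical
# placeholders standing in for the paper's components.
from collections import defaultdict
from typing import Callable, Dict, List

# Hypothetical per-model skill profiles (e.g., estimated accuracy per skill).
SKILL_PROFILES: Dict[str, Dict[str, float]] = {
    "math-expert-llm": {"algebra": 0.9, "geometry": 0.8, "molecular_biology": 0.3},
    "bio-expert-llm":  {"algebra": 0.4, "geometry": 0.3, "molecular_biology": 0.9},
    "general-llm":     {"algebra": 0.6, "geometry": 0.6, "molecular_biology": 0.6},
}

def recruit_experts(skills: List[str], k: int = 2) -> List[str]:
    """Pick the top-k models whose skill profiles best match the inferred skills."""
    scores = {
        model: sum(profile.get(s, 0.0) for s in skills)
        for model, profile in SKILL_PROFILES.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

def solve_batch(questions: List[str],
                infer_skills: Callable[[str], List[str]],
                call_llm: Callable[[str, List[str]], List[str]],
                aggregate: Callable[[str, List[str]], str]) -> List[str]:
    """Instance-level routing, expert-grouped batching, and aggregation."""
    # 1) Instance-level recruitment: infer skills per question, pick experts per question.
    assignments = [recruit_experts(infer_skills(q)) for q in questions]

    # 2) Group questions by expert so each model is loaded once and run in a single batch.
    per_expert: Dict[str, List[int]] = defaultdict(list)
    for idx, experts in enumerate(assignments):
        for expert in experts:
            per_expert[expert].append(idx)

    answers: Dict[int, List[str]] = defaultdict(list)
    for expert, indices in per_expert.items():
        outputs = call_llm(expert, [questions[i] for i in indices])  # one batched pass per expert
        for i, out in zip(indices, outputs):
            answers[i].append(out)

    # 3) The aggregator synthesizes each question's expert outputs into a final answer.
    return [aggregate(q, answers[i]) for i, q in enumerate(questions)]
```

The grouping in step 2 reflects the batching rationale stated in the abstract: by collecting all instances routed to the same expert before invoking it, each expert model is loaded and unloaded at most once per batch rather than once per instance.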