Sparse Mixture-of-Experts (MoE) is a key architecture for efficiently scaling large language models (LLMs). This work highlights that routers learned during pre-training are optimized for stability and robustness, which leaves performance and efficiency untapped at inference time. To address this, we propose Ban & Pick, a post-training routing strategy that requires no retraining and no architecture changes. Pick identifies the key experts that have an outsized impact on performance and strengthens their influence, improving accuracy. Ban dynamically prunes redundant experts according to layer- and token-level sensitivity, accelerating inference. Experiments on fine-grained MoE LLMs such as DeepSeek and Qwen3 show that Ban & Pick delivers both higher accuracy and faster inference.
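
To make the routing-level idea concrete, the sketch below shows one plausible way a Ban & Pick-style adjustment could be applied to a single token's router output at inference time. It is a minimal illustration under assumed details: the function name `ban_and_pick_route`, the boost factor `pick_boost`, and the cutoff `ban_threshold` are hypothetical placeholders, and the paper's actual criteria for selecting key experts and for layer/token sensitivity are not reproduced here.

```python
import torch

def ban_and_pick_route(router_logits, top_k=8,
                       picked_experts=None, pick_boost=1.5,
                       ban_threshold=0.05):
    """Illustrative post-hoc routing tweak (not the paper's exact rule).

    router_logits:  (num_experts,) raw gate scores for one token.
    picked_experts: indices of pre-identified key experts to strengthen (Pick).
    ban_threshold:  experts whose weight falls below this fraction of the
                    top weight are dropped for this token (Ban).
    """
    probs = torch.softmax(router_logits, dim=-1)

    # Pick: amplify the gate weights of the pre-identified key experts.
    if picked_experts is not None:
        probs = probs.clone()
        probs[picked_experts] *= pick_boost

    # Standard top-k selection over the (possibly boosted) weights.
    weights, indices = torch.topk(probs, top_k)

    # Ban: drop experts that contribute little for this token,
    # so fewer experts are actually executed.
    keep = weights >= ban_threshold * weights[0]
    weights, indices = weights[keep], indices[keep]

    # Renormalize so the kept experts' weights sum to one.
    weights = weights / weights.sum()
    return indices, weights
```

In this toy version, Pick only rescales a fixed set of expert weights and Ban uses a single relative threshold; the paper's method instead determines which experts to strengthen or skip from measured impact on performance and from per-layer, per-token sensitivity.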