In this paper, we propose a model merging framework as an efficient means of improving the inference capabilities of large language models (LLMs). Existing model merging methods rely on manually designed hyperparameter tuning strategies, which limits the exploration of candidate model combinations and requires substantial effort. We present an automated model merging framework that enables fine-grained exploration of merging strategies while reducing search cost through multi-fidelity approximations. The framework supports both single- and multi-objective optimization and introduces two new search spaces: layer-wise fusion (LFS) and depth-wise integration (DIS). Evaluations on a range of benchmarks show that the framework autonomously discovers merges that further improve single-objective performance even on tasks for which the models have already been fine-tuned, as well as merges that optimize the multi-objective trade-off frontier across tasks. Effective merges can be found even under limited computational budgets (e.g., fewer than 500 search steps).
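As a rough illustration of the kind of layer-wise merge such a framework searches over, the sketch below linearly interpolates two checkpoints with per-layer coefficients and scores candidate coefficient vectors with a placeholder evaluation function under a small step budget. All names here (`merge_layerwise`, `evaluate`, the random-search loop) are hypothetical and are not the paper's LFS/DIS search spaces or its multi-fidelity optimizer; they only sketch the general idea of searching layer-wise merging hyperparameters.

```python
# Hypothetical sketch: per-layer linear interpolation of two checkpoints,
# with a naive random search over the per-layer coefficients.
# This is NOT the paper's method, only an illustration of searching
# layer-wise merging hyperparameters under a limited step budget.
import random
from typing import Dict, List

import numpy as np


def merge_layerwise(a: Dict[str, np.ndarray],
                    b: Dict[str, np.ndarray],
                    alphas: List[float]) -> Dict[str, np.ndarray]:
    """Interpolate two state dicts layer by layer: w = alpha*a + (1-alpha)*b."""
    merged = {}
    for i, name in enumerate(sorted(a)):
        merged[name] = alphas[i] * a[name] + (1.0 - alphas[i]) * b[name]
    return merged


def evaluate(weights: Dict[str, np.ndarray]) -> float:
    """Placeholder fitness; in practice this would be a (possibly
    low-fidelity) benchmark evaluation of the merged model."""
    return -sum(float(np.abs(w).mean()) for w in weights.values())


# Two toy "checkpoints" with identical three-layer architectures.
rng = np.random.default_rng(0)
model_a = {f"layer{i}.weight": rng.normal(size=(4, 4)) for i in range(3)}
model_b = {f"layer{i}.weight": rng.normal(size=(4, 4)) for i in range(3)}

best_score, best_alphas = float("-inf"), None
for _ in range(100):  # budget-limited search, analogous to a few hundred steps
    alphas = [random.random() for _ in range(len(model_a))]
    score = evaluate(merge_layerwise(model_a, model_b, alphas))
    if score > best_score:
        best_score, best_alphas = score, alphas

print("best per-layer coefficients:", [round(a, 2) for a in best_alphas])
```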