This paper presents Frank-Wolfe Merging (FW-Merging), a novel approach that addresses the limitations of model merging, a data-efficient approach to multi-task learning (MTL). Existing merging methods scale poorly when combining many models from diverse sources; FW-Merging tackles this by formulating model merging as a constrained optimization problem. Inspired by Frank-Wolfe optimization, it linearly approximates the objective function and, at each iteration, selects and merges the most relevant models. FW-Merging can be integrated with existing merging methods to improve performance, remains applicable to diverse model sources, and incurs a constant memory overhead.
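To make the iterative selection concrete, the sketch below illustrates a Frank-Wolfe-style merging loop. It is a minimal illustration, not the paper's implementation: the names (`fw_merge`, `grad_fn`, `checkpoints`, `num_iters`), the flattened-parameter representation, and the standard 2/(t+2) step size are all assumptions made for exposition.

```python
import numpy as np

def fw_merge(checkpoints, grad_fn, num_iters=10):
    """Illustrative Frank-Wolfe-style merging over candidate checkpoints.

    checkpoints: list of 1-D parameter vectors (flattened models);
                 assumed to span the feasible set of merged models.
    grad_fn:     callable returning the gradient of the merging objective
                 at the current merged parameters (e.g., estimated on a
                 small proxy dataset).
    """
    theta = checkpoints[0].copy()              # initialize from one model
    for t in range(num_iters):
        g = grad_fn(theta)                     # linearize the objective
        # Linear minimization oracle: the checkpoint best aligned with
        # the descent direction is the most "relevant" model to merge.
        scores = [g @ (c - theta) for c in checkpoints]
        s = checkpoints[int(np.argmin(scores))]
        gamma = 2.0 / (t + 2.0)                # standard FW step size
        theta = (1.0 - gamma) * theta + gamma * s  # merge step
    return theta
```

Because each iteration touches only the current merged parameters and one selected checkpoint, memory use stays constant regardless of how many candidate models are in the pool, consistent with the claim above.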