This paper explores model merging, a key technique for improving the performance and efficiency of large language models (LLMs). Although the open-source community has repeatedly merged existing models to drive model evolution, a systematic understanding of the benefits of merging and the factors that govern it is still lacking. Drawing an analogy to biological evolution, this study examines model evolution through iterative merging and introduces "model kinship," a measure of the similarity or relatedness between LLMs. Empirical analysis shows that model kinship is closely related to the performance gains obtained from merging, providing a useful criterion for selecting candidate models. Building on these insights, we propose "Top-k Greedy Merge with Model Kinship Consideration," a novel merging strategy that uses model kinship as a guide to mitigate performance degradation and sustain effective model evolution.
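The abstract does not give a formal definition of model kinship. As a purely illustrative sketch, one plausible formulation (an assumption here, not necessarily the paper's metric) measures kinship as the cosine similarity between two models' delta parameters relative to a shared base model:

```python
import numpy as np

def model_kinship(theta_a, theta_b, theta_base):
    """Illustrative kinship score: cosine similarity between the two
    models' delta parameters relative to a shared base model.
    Hypothetical formulation -- the paper's exact metric may differ."""
    da = np.asarray(theta_a, dtype=float) - np.asarray(theta_base, dtype=float)
    db = np.asarray(theta_b, dtype=float) - np.asarray(theta_base, dtype=float)
    denom = np.linalg.norm(da) * np.linalg.norm(db)
    if denom == 0.0:
        return 0.0  # at least one model is identical to the base
    return float(np.dot(da, db) / denom)

# Toy parameter vectors standing in for flattened model weights.
base = [0.0, 0.0, 0.0]
a = [1.0, 2.0, 3.0]
b = [2.0, 4.0, 6.0]    # delta points in the same direction as a's
c = [-1.0, -2.0, -3.0] # delta points in the opposite direction

print(model_kinship(a, b, base))  # → 1.0 (highly related)
print(model_kinship(a, c, base))  # → -1.0 (maximally dissimilar)
```

Under this reading, a merge-candidate selection strategy would prefer pairs whose kinship indicates they are related enough to merge usefully but not so similar that merging adds nothing.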