This paper comprehensively reviews research on using large language models (LLMs), with their powerful comprehension and reasoning capabilities, to solve optimization problems. Focusing on their synergy with evolutionary computation, we systematically analyze recent developments and organize them within a structured framework. We divide the research into two main phases: optimization modeling with LLMs and optimization solving with LLMs. The latter comprises three paradigms: using LLMs as standalone optimizers, embedding them within optimization algorithms, and employing them for algorithm selection and generation. Within each category, we analyze representative methods, highlight technical challenges, and examine their interactions with existing approaches. We also survey applications across various fields, including natural science, engineering, and machine learning. By comparing LLM-based methods with conventional ones, we identify key gaps and open research challenges, and suggest future directions toward a self-evolving agent ecosystem for optimization.