This paper comprehensively reviews recent research on using large language models (LLMs) to automate optimization modeling, a technique widely used across many fields for optimal decision-making. It covers the entire technology stack, including data synthesis and fine-tuning of base models, inference frameworks, benchmark datasets, and performance evaluation. Specifically, we analyze the high error rates in existing benchmark datasets, refine those datasets to build a new leaderboard for fair performance comparison, and launch an online portal that integrates the refined datasets, code, and a paper repository. Finally, we discuss the limitations of current methodologies and suggest directions for future research.