Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Large Language Models for Combinatorial Optimization: A Systematic Review

Created by
  • Haebom

Author

Francesca Da Ros, Michael Soprano, Luca Di Gaspero, Kevin Roitero

Outline

This paper presents a systematic review of applications of large language models (LLMs) to combinatorial optimization (CO). Following the PRISMA guidelines, the authors searched over 2,000 publications via Scopus and Google Scholar, screened them against four inclusion and four exclusion criteria covering language, research focus, publication year, and publication type, and ultimately selected 103 studies. The selected studies are organized into semantic categories and topics, yielding a comprehensive overview of the field: what LLMs do in CO, the architectures employed, existing datasets specifically designed to evaluate LLMs on CO, and their application areas. The paper concludes by suggesting future directions for the use of LLMs in this field.

Takeaways, Limitations

Takeaways:
The review provides a comprehensive overview by systematically analyzing research trends in applying LLMs to combinatorial optimization problems.
It analyzes multiple aspects of LLM use in CO, including model architectures, evaluation datasets, and application areas.
It suggests future directions for utilizing LLMs in combinatorial optimization.
Limitations:
Relying on only two databases, Scopus and Google Scholar, may limit the coverage of the search.
Because studies were selected against specific criteria, some relevant work may have been missed.
The projected future directions are necessarily somewhat speculative.