This paper presents a systematic review of the applications of large language models (LLMs) to combinatorial optimization (CO). We report our findings following the PRISMA guidelines, having screened over 2,000 publications retrieved via Scopus and Google Scholar. We evaluate the publications against four inclusion and four exclusion criteria related to language, research focus, year of publication, and publication type, ultimately selecting 103 studies. We categorize the selected studies into semantic categories and topics, providing a comprehensive overview of the field, including the roles LLMs perform in CO, the LLM architectures employed, existing datasets specifically designed to evaluate LLMs on CO tasks, and their application areas. Finally, we suggest future directions for the use of LLMs in this field.