Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

HeurAgenix: Leveraging LLMs for Solving Complex Combinatorial Optimization Challenges

Created by
  • Haebom

Author

Xianliang Yang, Ling Zhang, Haolong Qian, Lei Song, Jiang Bian

Outline

HeurAgenix is a two-stage hyper-heuristic framework powered by large language models (LLMs): it first evolves a pool of heuristics and then automatically selects among them during problem solving. In the heuristic evolution phase, the LLM compares solutions produced by seed heuristics against higher-quality solutions to extract reusable evolution strategies. During problem solving, the LLM dynamically selects the most promising heuristic for each problem state based on its perception of that state. The selector can be either a state-of-the-art LLM or a fine-tuned lightweight model, the latter offering much lower inference cost. Because the complexity of combinatorial optimization problems makes reliable supervision scarce, the lightweight heuristic selector is fine-tuned with a dual-reward mechanism that jointly exploits selection-preference and state-perception signals, enabling robust selection even under noisy annotations. Extensive experiments on standard benchmarks show that HeurAgenix outperforms existing LLM-based hyper-heuristics and matches or exceeds specialized expert solvers. The code is available at https://github.com/microsoft/HeurAgenix .
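
To make the solve-time control flow concrete, here is a minimal, runnable sketch of a per-state heuristic-selection loop. All names here (State, Heuristic, select_heuristic) are illustrative rather than the actual HeurAgenix API, and the random choice is only a stand-in for the LLM (or fine-tuned lightweight) selector described above.

```python
import random
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class State:
    solution: list = field(default_factory=list)
    is_complete: bool = False

@dataclass
class Heuristic:
    name: str
    apply: Callable[[State], State]  # one construction/improvement step

def select_heuristic(state: State, pool: List[Heuristic]) -> Heuristic:
    # Stand-in for the LLM-backed selector: in HeurAgenix the current
    # problem state and the candidate heuristics are presented to a
    # frontier LLM or a fine-tuned lightweight model, which answers
    # with the most promising heuristic. Random choice keeps this
    # sketch self-contained and runnable.
    return random.choice(pool)

def solve(state: State, pool: List[Heuristic], max_steps: int = 100) -> State:
    for _ in range(max_steps):
        if state.is_complete:
            break
        h = select_heuristic(state, pool)  # re-selected at every state
        state = h.apply(state)
    return state
```

The key design point is that selection happens per problem state rather than once per instance, so the framework can switch heuristics as the search landscape changes.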

Takeaways, Limitations

Takeaways:
  • Presents a novel framework for automatically designing and selecting heuristic algorithms with LLMs.
  • Outperforms or matches existing LLM-based hyper-heuristics and expert solvers on standard benchmarks.
  • Offers a new approach to solving combinatorial optimization problems.
  • Inference cost can be reduced by using the fine-tuned lightweight selector instead of a frontier LLM.
  • The dual-reward mechanism yields robust heuristic selection even under noisy annotations.
Limitations:
  • Overall performance may depend heavily on the capability of the underlying LLM.
  • Performance may degrade on certain types of combinatorial optimization problems.
  • Fine-tuning the lightweight selector may require large amounts of data.
  • The LLM's selection decisions may lack interpretability.
  • Properly tuning the parameters of the dual-reward mechanism is important (a hedged sketch of one possible form of this objective follows below).
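
The dual-reward mechanism is described as jointly exploiting selection-preference and state-perception signals. As an illustration only, one plausible shape for such a joint objective is sketched below; the cross-entropy/MSE choices, the tensor names, and the weight lam are assumptions for illustration, not the authors' actual loss.

```python
import torch
import torch.nn.functional as F

def dual_reward_loss(selection_logits: torch.Tensor,  # (batch, num_heuristics)
                     preferred_idx: torch.Tensor,     # (batch,) noisy labels
                     state_pred: torch.Tensor,        # predicted state features
                     state_target: torch.Tensor,      # reference state features
                     lam: float = 0.5) -> torch.Tensor:
    # Selection-preference signal: push the selector toward the heuristic
    # that the (possibly noisy) annotations mark as preferred in this state.
    selection_loss = F.cross_entropy(selection_logits, preferred_idx)
    # State-perception signal: an auxiliary term that forces the model to
    # represent the problem state faithfully, regularizing against label noise.
    perception_loss = F.mse_loss(state_pred, state_target)
    # lam is a hypothetical weighting hyperparameter; its tuning is exactly
    # the sensitivity flagged in the limitation above.
    return selection_loss + lam * perception_loss
```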