This page collects summaries of artificial intelligence papers published around the world. Summaries are generated with Google Gemini, and the page is operated on a non-profit basis. Copyright of each paper belongs to its authors and their institutions; please cite the source when sharing.
Prompting for Performance: Exploring LLMs for Configuring Software
Created by
Haebom
Author
Helge Spieker, Théo Matricon, Nassim Belmecheri, Jørn Eirik Betten, Gauthier Le Bartz Lyan, Heraldo Borges, Quentin Mazouni, Dennis Gross, Arnaud Gotlieb, Mathieu Acher
Outline
This paper explores the potential of large language models (LLMs) to support configuration optimization for improved software performance. The authors evaluate LLMs on highly configurable systems such as compilers, video encoders, and SAT solvers, across tasks including identifying relevant configuration options, ranking configurations, and recommending high-performing settings. Experiments show that while LLMs sometimes match expert knowledge, they also exhibit hallucinations and superficial reasoning. The work is presented as a first step toward the systematic evaluation of LLM-based software configuration support.
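The ranking task described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual methodology: the prompt wording, the `build_ranking_prompt`/`parse_ranking` helpers, and the GCC flags are all assumptions made for the example, and the model reply is a hard-coded stand-in.

```python
# Illustrative sketch of LLM-based configuration ranking (not from the paper).
# The prompt template and parsing logic here are hypothetical.

def build_ranking_prompt(system: str, goal: str, options: list[str]) -> str:
    """Build a prompt asking an LLM to rank configuration options by impact."""
    bullets = "\n".join(f"- {opt}" for opt in options)
    return (
        f"You are tuning {system} for {goal}.\n"
        f"Rank the following options from most to least impactful, "
        f"one per line:\n{bullets}"
    )

def parse_ranking(reply: str, options: list[str]) -> list[str]:
    """Keep only lines naming a known option, preserving the reply's order."""
    ranked = []
    for line in reply.splitlines():
        for opt in options:
            if opt in line and opt not in ranked:
                ranked.append(opt)
    return ranked

# Example with GCC-style flags (illustrative, not the paper's benchmark set):
opts = ["-O3", "-funroll-loops", "-flto"]
prompt = build_ranking_prompt("GCC", "fast binaries", opts)
reply = "1. -O3\n2. -flto\n3. -funroll-loops"  # stand-in for a model reply
print(parse_ranking(reply, opts))  # ['-O3', '-flto', '-funroll-loops']
```

Filtering the reply against the known option list is one simple guard against the hallucinated options the paper warns about: anything the model invents that is not a real option is silently dropped.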
Takeaways, Limitations
•
Takeaways:
◦
Demonstrates the potential of LLMs to support software configuration optimization.
◦
For certain tasks and systems, LLMs achieve expert-level performance.
◦
Lays the foundation for developing LLM-based software configuration support solutions.
•
Limitations:
◦
LLM performance varies greatly across tasks and systems.
◦
LLMs may hallucinate options or reason only superficially.
◦
Systematic evaluation and further research are needed.