Tuning LLM-based Code Optimization via Meta-Prompting: An Industrial Perspective
Created by
Haebom
Author
Jingzhi Gong, Rafail Giavrimis, Paul Brookes, Vardan Voskanyan, Fan Wu, Mari Ashiga, Matthew Truscott, Mike Basios, Leslie Kanthan, Jie Xu, Zheng Wang
Meta-Prompted Code Optimization (MPCO)
Outline
This paper presents an industrial study of automatic code optimization with multiple large language models (LLMs). To address the challenge of model-specific prompt engineering, where prompts optimized for one LLM fail to transfer to others, we propose the Meta-Prompted Code Optimization (MPCO) framework. MPCO dynamically generates context-aware optimization prompts by integrating project metadata, task requirements, and LLM-specific context. Deployed as a core component of the ARTEMIS code optimization platform, MPCO is evaluated through 366 hours of runtime benchmarking on five real-world codebases. It achieves performance improvements of up to 19.06% over baseline methods, and 96% of its optimizations stem from meaningful edits.
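As a rough illustration of the meta-prompting workflow described above, the sketch below assembles a meta-prompt from the three context sources (project metadata, task requirements, LLM-specific context) and asks a "meta-prompter" LLM to emit a tailored optimization prompt. This is a minimal sketch based on the abstract, not the authors' implementation; all identifiers and the `meta_llm` callable are hypothetical stand-ins for a real LLM client.

```python
# Hypothetical sketch of meta-prompting for code optimization.
# Not the MPCO implementation; names and structure are illustrative only.
from dataclasses import dataclass


@dataclass
class OptimizationContext:
    project_metadata: str   # e.g., language, build system, key dependencies
    task_requirements: str  # e.g., "reduce runtime of the hot loop in parser.py"
    llm_profile: str        # e.g., "target model responds best to concise, stepwise prompts"


META_PROMPT_TEMPLATE = """You are a prompt engineer for code optimization.
Write an optimization prompt tailored to the target LLM described below.

Project metadata:
{project_metadata}

Task requirements:
{task_requirements}

Target LLM profile:
{llm_profile}

Return only the prompt text."""


def build_meta_prompt(ctx: OptimizationContext) -> str:
    """Assemble the meta-prompt from the three context sources."""
    return META_PROMPT_TEMPLATE.format(
        project_metadata=ctx.project_metadata,
        task_requirements=ctx.task_requirements,
        llm_profile=ctx.llm_profile,
    )


def generate_optimization_prompt(ctx: OptimizationContext, meta_llm) -> str:
    """Ask a meta-prompter LLM to produce a model-specific optimization prompt.

    `meta_llm` is assumed to be any callable mapping a prompt string to a
    completion string (a stand-in for a real LLM API client).
    """
    return meta_llm(build_meta_prompt(ctx))
```

The generated prompt would then be sent to the target LLM that actually rewrites the code; swapping the `llm_profile` field is what allows the same pipeline to serve different backend models.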
Takeaways, Limitations
•
Takeaways:
◦
MPCO automatically generates high-quality, task-specific prompts that work across a variety of LLMs, enabling practical deployment of systems leveraging multiple LLMs.
◦
We found that comprehensive contextual integration is essential for effective meta-prompting.
◦
We demonstrate that leading LLMs can serve as effective meta-prompters, offering useful insights for industry practitioners.
◦
The significant performance improvements on real-world codebases demonstrate MPCO's practicality.
•
Limitations:
◦
No specific limitations are stated; this summary is based solely on the abstract.