Daily Arxiv

This page collects papers on artificial intelligence published around the world.
The summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright in each paper belongs to its authors and their institutions; when sharing, simply cite the source.

LLAMAFUZZ: Large Language Model Enhanced Greybox Fuzzing

Created by
  • Haebom

Author

Hongxiang Zhang, Yuyang Rong, Yifeng He, Hao Chen

Outline

This paper presents a method for improving greybox fuzzing of structured data by leveraging a large language model (LLM). The LLM's pre-trained knowledge of data formats and transformations is used to generate new valid inputs, and the model is fine-tuned on paired mutation seeds so that it learns structured formats and effective mutation strategies. In experiments on the Magma benchmark and a range of real-world programs, the resulting LLM-based fuzzer, LLAMAFUZZ, outperforms existing fuzzers and achieves higher code coverage.
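
To make the pipeline concrete, below is a minimal, self-contained Python sketch of how an LLM-driven mutator might slot into a coverage-guided greybox loop. The `llm_mutate` and `run_target` functions are hypothetical placeholders introduced here for illustration; the paper's actual fine-tuned model, execution harness, and seed scheduling are not reproduced.

```python
import random

def llm_mutate(seed: bytes) -> bytes:
    """Placeholder for the fine-tuned LLM call (assumption, not the paper's API).

    In LLAMAFUZZ this step would prompt a model fine-tuned on paired
    (seed, mutated-seed) examples to produce a new, format-valid input.
    Here we fall back to a simple byte flip so the sketch runs standalone.
    """
    data = bytearray(seed)
    if data:
        i = random.randrange(len(data))
        data[i] ^= 0xFF
    return bytes(data)

def run_target(data: bytes) -> set[int]:
    """Placeholder for executing the instrumented target.

    Returns the set of covered edge IDs; a real harness would read this
    from coverage instrumentation (e.g. an AFL-style bitmap).
    """
    return {b % 16 for b in data}  # toy "coverage" signal for the demo

def fuzz_loop(initial_seeds: list[bytes], iterations: int = 1000) -> None:
    queue = list(initial_seeds)
    global_coverage: set[int] = set()
    for _ in range(iterations):
        seed = random.choice(queue)
        candidate = llm_mutate(seed)
        covered = run_target(candidate)
        # Keep the mutant only if it reaches new coverage,
        # as in a standard greybox scheduling policy.
        if not covered <= global_coverage:
            global_coverage |= covered
            queue.append(candidate)
    print(f"queue size: {len(queue)}, edges covered: {len(global_coverage)}")

if __name__ == "__main__":
    fuzz_loop([b'{"key": "value"}'])
```

The design point the sketch illustrates is that the LLM replaces only the mutation operator; coverage feedback and seed retention remain the same as in a conventional greybox fuzzer.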

Takeaways, Limitations

Takeaways:
Leveraging an LLM significantly improves the performance of structured-data fuzzing.
LLAMAFUZZ finds a variety of bugs and increases code coverage.
The work proposes a novel approach that uses the LLM's prior knowledge to improve fuzzing efficiency.
Limitations:
No specific limitations are stated in the paper.