This paper presents a method for improving greybox fuzzing of structured data by leveraging a large language model (LLM). The LLM's prior knowledge of data formats and transformations is used to generate new valid inputs, and the model is fine-tuned on paired mutation seeds so that it learns structured formats and effective mutation strategies. Experiments on the Magma benchmark and a set of real-world programs show that the proposed LLM-based fuzzer, LLAMAFUZZ, outperforms existing fuzzers and achieves improved code coverage.
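The mutation-pair fine-tuning idea can be illustrated with a minimal sketch: each (original seed, mutated seed) pair is turned into a prompt/completion training record, and at fuzzing time the fine-tuned model is asked to emit a new mutated seed. The function names, the hex encoding of binary seeds, and the prompt template below are illustrative assumptions of this sketch, not the paper's exact pipeline.

```python
import json
from pathlib import Path

# Hypothetical prompt template; the paper's actual prompt may differ.
PROMPT_TEMPLATE = "Mutate the following seed into a new valid input:\n{seed_hex}\n"


def make_training_example(original: bytes, mutated: bytes) -> dict:
    """Turn one (original, mutated) seed pair into a prompt/completion record.

    Binary seeds are hex-encoded so the language model can treat them as
    plain text (an assumption of this sketch).
    """
    return {
        "prompt": PROMPT_TEMPLATE.format(seed_hex=original.hex()),
        "completion": mutated.hex(),
    }


def write_finetune_dataset(pairs, out_path: Path) -> None:
    """Write paired mutation seeds as JSONL for supervised fine-tuning."""
    with out_path.open("w") as f:
        for original, mutated in pairs:
            f.write(json.dumps(make_training_example(original, mutated)) + "\n")


def llm_mutate(seed: bytes, generate) -> bytes:
    """Ask the fine-tuned model for a mutated seed.

    `generate` stands in for whatever text-generation call the fine-tuned
    model exposes; malformed hex output falls back to the original seed so
    the fuzzing loop keeps running.
    """
    reply = generate(PROMPT_TEMPLATE.format(seed_hex=seed.hex()))
    try:
        return bytes.fromhex(reply.strip())
    except ValueError:
        return seed


if __name__ == "__main__":
    # Toy pair: a PNG header and a lightly corrupted variant of it.
    pairs = [(b"\x89PNG\r\n\x1a\n", b"\x89PNG\r\n\x1a\x00")]
    write_finetune_dataset(pairs, Path("mutation_pairs.jsonl"))
```

In a greybox fuzzer, a mutator like `llm_mutate` would typically be called alongside the fuzzer's built-in byte-level mutations, with coverage feedback deciding which generated inputs are kept as new seeds.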