This page collects papers on artificial intelligence published around the world. Summaries are generated with Google Gemini, and the site is operated on a non-profit basis. Copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.
This paper presents a knowledge editing (KE) method for correcting outdated or incorrect information in large language models (LLMs). While existing KE methods can update individual facts, they often fail to generalize to multi-hop reasoning tasks that depend on the updated knowledge. By analyzing reasoning circuits (the neural pathways LLMs use for knowledge-based inference), the authors find that existing layer-localized KE approaches (e.g., MEMIT, WISE), which edit only one or a few model layers, fail to properly integrate updated knowledge into these reasoning pathways. To address this limitation, the paper introduces Circuit-aware Knowledge Editing (CaKE), a novel method that enables more effective integration of updated knowledge in LLMs. Using only a small number of carefully selected data samples guided by circuit-based analysis, CaKE encourages the model to develop appropriate reasoning circuits for the newly incorporated knowledge. Experimental results show that CaKE enables more accurate and consistent use of edited knowledge across related reasoning tasks, improving multi-hop reasoning accuracy by an average of 20% on the MQuAKE dataset while using less memory than existing KE methods. Code and data are available at https://github.com/zjunlp/CaKE .
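The core evaluation idea here (edits must propagate through multi-hop chains, as in MQuAKE-style benchmarks) can be illustrated with a hypothetical toy sketch. This is not the authors' code: the knowledge store, relation names, and entities below are invented purely to show why a consistent reasoner must route a multi-hop answer through the edited fact.

```python
# Hypothetical illustration (not CaKE itself): a single-fact edit must
# propagate through a multi-hop reasoning chain to count as successful.

# Toy knowledge store: (subject, relation) -> object  [invented example data]
knowledge = {
    ("UK", "head_of_government"): "Boris Johnson",
    ("Boris Johnson", "spouse"): "Carrie Johnson",
    ("Rishi Sunak", "spouse"): "Akshata Murty",
}

def edit_fact(store, subject, relation, new_object):
    """Apply a knowledge edit by overwriting one (subject, relation) fact."""
    store[(subject, relation)] = new_object

def two_hop(store, subject, rel1, rel2):
    """Answer a 2-hop query by chaining two single-hop lookups."""
    intermediate = store[(subject, rel1)]   # hop 1: must reflect any edit
    return store[(intermediate, rel2)]      # hop 2: conditioned on hop 1

# Before editing: "Who is the spouse of the UK's head of government?"
print(two_hop(knowledge, "UK", "head_of_government", "spouse"))

# Edit the first hop. A model whose reasoning circuit actually uses the
# updated knowledge must now answer the 2-hop question via the new entity;
# layer-localized edits often keep answering via the stale intermediate.
edit_fact(knowledge, "UK", "head_of_government", "Rishi Sunak")
print(two_hop(knowledge, "UK", "head_of_government", "spouse"))
```

In an LLM there is no explicit lookup table, which is exactly the difficulty the paper targets: the edit lands in one layer, but the multi-hop circuit may never read it at the intermediate step.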
Takeaways, Limitations
•
Takeaways:
◦
Presents CaKE, a novel knowledge editing method grounded in an analysis of LLMs' reasoning circuits.