Daily Arxiv

This page collects papers related to artificial intelligence published around the world.
The summaries are generated by Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please cite the source when sharing.

Beyond Manuals and Tasks: Instance-Level Context Learning for LLM Agents

Created by
  • Haebom

Author

Kuntai Cai, Juncheng Liu, Xianglin Yang, Zhaojie Niu, Xiaokui Xiao, Xing Chen


Outline

This paper argues that, beyond environment-level manuals (which define the interaction interface and rules of an environment) and task-level instructions (which are tied to specific goals), large language model (LLM) agents must also learn instance-level context: verifiable, reusable facts about a specific environment instance, such as object locations, crafting recipes, and local rules. Success depends not only on reasoning over global rules or task prompts, but also on accurate, consistent, fact-grounded decision-making. The authors formalize this as the Instance-Level Context Learning (ILCL) problem and present a task-agnostic method that prioritizes what to explore next using a compact TODO forest and executes each exploration task with a lightweight plan-act-extract loop. The method automatically produces a high-precision context document that can be reused across downstream tasks and agents, amortizing the initial exploration cost. Experiments on TextWorld, ALFWorld, and Crafter show consistent gains in both success rate and efficiency; for example, ReAct's average success rate on TextWorld rises from 37% to 95%, and IGE's from 81% to 95%.
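The prioritize-then-explore procedure described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: all names here (`TodoForest`, `plan`, `act`, `extract_facts`, the priority scheme) are hypothetical stand-ins for components the summary only describes at a high level.

```python
from dataclasses import dataclass, field

@dataclass
class TodoNode:
    """One exploration subgoal; children refine it into smaller subgoals."""
    goal: str
    priority: float = 1.0
    done: bool = False
    children: list = field(default_factory=list)

class TodoForest:
    """Compact forest of exploration subgoals, popped in priority order."""
    def __init__(self, root_goals):
        self.roots = [TodoNode(g) for g in root_goals]

    def _pending(self, nodes):
        for node in nodes:
            if not node.done:
                yield node
            yield from self._pending(node.children)

    def pop_next(self):
        """Return the highest-priority unfinished subgoal, or None."""
        pending = list(self._pending(self.roots))
        return max(pending, key=lambda n: n.priority) if pending else None

def learn_instance_context(forest, plan, act, extract_facts, max_steps=100):
    """Plan-act-extract loop: execute prioritized TODOs and accumulate
    verified facts into a reusable instance-level context document."""
    context = []  # verifiable, reusable facts about this environment instance
    for _ in range(max_steps):
        todo = forest.pop_next()
        if todo is None:
            break
        actions = plan(todo.goal, context)            # LLM proposes actions
        observations = [act(a) for a in actions]      # execute in environment
        context.extend(extract_facts(observations))   # distill verified facts
        todo.done = True
    return context
```

In this sketch the returned `context` list plays the role of the reusable context document: a downstream agent could be prompted with it directly, skipping the exploration cost already paid.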

Takeaways, Limitations

  • Highlights the importance of instance-level context, alongside the traditional environment-level and task-level context, for the success of LLM agents.
  • Defines the Instance-Level Context Learning (ILCL) problem and proposes a task-agnostic method to solve it.
  • Presents an efficient context-learning method that uses a TODO forest for exploration prioritization and a plan-act-extract loop for execution.
  • Experimentally demonstrates the method's effectiveness in the TextWorld, ALFWorld, and Crafter environments.
  • Shows that the proposed methodology can improve the performance of existing LLM agents such as ReAct and IGE.
  • Further research is needed on the generalizability of the method and its applicability to a wider range of environments.
  • Details of the TODO forest's construction and the tuning of the plan-act-extract loop may be underspecified.
  • Additional information on the concrete implementation and experimental setup may be required.