This paper argues that large language model (LLM) agents must recognize instance-level context—verifiable, reusable facts tied to a specific environment instance, such as object locations, crafting recipes, and local rules—beyond environment-level manuals that define interaction interfaces and rules, and task-level instructions tied to specific goals. Success depends not only on reasoning over global rules or task prompts, but also on accurate, consistent fact-based decision-making. We define the Instance-Level Context Learning (ILCL) problem and present a task-agnostic method that prioritizes the next exploration task using a compact TODO forest and executes it with a lightweight plan-act-extract loop. The method automatically produces high-precision context documents that can be reused across downstream tasks and agents, amortizing the initial exploration cost. Experiments on TextWorld, ALFWorld, and Crafter show consistent gains in both success rate and efficiency: on TextWorld, ReAct's average success rate rises from 37% to 95%, and IGE's from 81% to 95%.
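The TODO-forest scheduling and plan-act-extract loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names (`TodoNode`, `next_task`, `run_ilcl`) and the flat-dictionary context document are assumptions made for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class TodoNode:
    """One exploration task in the TODO forest (hypothetical structure)."""
    name: str
    priority: int                      # higher = explore sooner
    done: bool = False
    children: list = field(default_factory=list)

def next_task(forest):
    """Pick the highest-priority unfinished node across all trees."""
    pending, stack = [], list(forest)
    while stack:
        node = stack.pop()
        if not node.done:
            pending.append(node)
        stack.extend(node.children)
    return max(pending, key=lambda n: n.priority) if pending else None

def run_ilcl(forest, act, extract, max_steps=10):
    """Plan-act-extract loop: run tasks, harvest facts into a context doc."""
    context_doc = {}                   # reusable instance-level facts
    for _ in range(max_steps):
        task = next_task(forest)       # plan: choose the next task
        if task is None:
            break
        observation = act(task.name)   # act: execute it in the environment
        context_doc.update(extract(observation))  # extract: keep verified facts
        task.done = True
    return context_doc
```

In this sketch, the context document is a plain dictionary of facts that downstream tasks or agents could reload, which is how the reuse described in the abstract would amortize the exploration cost.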