Daily Arxiv

This page curates AI-related papers published worldwide.
All content here is summarized with Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

Retrieval-Augmented Decision Transformer: External Memory for In-context RL

Created by
  • Haebom

Authors

Thomas Schmied, Fabian Paischer, Vihang Patil, Markus Hofmarcher, Razvan Pascanu, Sepp Hochreiter

Outline

To overcome the limitations of in-context learning (ICL) in reinforcement learning (RL), this paper proposes the Retrieval-Augmented Decision Transformer (RA-DT), which uses an external memory mechanism to retrieve only those sub-trajectories from past experience that are relevant to the current situation. RA-DT employs a domain-agnostic retrieval component that requires no training and outperforms existing methods on grid-world environments, robot simulations, and procedurally generated video games, achieving high performance even with short context lengths. The paper also identifies the limitations of current ICL methods in complex environments, suggests future research directions, and releases the datasets used for the four environments.
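The sketch below illustrates the external-memory retrieval step described above, assuming a frozen, domain-agnostic embedding function (a hypothetical `embed_fn`) and a flat cosine-similarity index over stored sub-trajectories. It is an illustration of the idea under these assumptions, not the authors' implementation.

```python
# Minimal sketch of an external memory for sub-trajectories (assumed interface,
# not the authors' code). Sub-trajectories are embedded with a frozen encoder,
# stored as unit-norm vectors, and retrieved by cosine similarity to the
# embedding of the agent's current context.
import numpy as np

class TrajectoryMemory:
    """Flat external memory of sub-trajectory embeddings with cosine-similarity search."""

    def __init__(self, embed_fn):
        self.embed_fn = embed_fn   # frozen encoder: sub-trajectory -> vector (assumption)
        self.keys = []             # unit-norm embedding vectors
        self.values = []           # the raw sub-trajectories themselves

    def add(self, sub_trajectory):
        v = np.asarray(self.embed_fn(sub_trajectory), dtype=np.float32)
        self.keys.append(v / (np.linalg.norm(v) + 1e-8))
        self.values.append(sub_trajectory)

    def retrieve(self, context, k=4):
        """Return the k stored sub-trajectories most similar to the current context."""
        q = np.asarray(self.embed_fn(context), dtype=np.float32)
        q = q / (np.linalg.norm(q) + 1e-8)
        sims = np.stack(self.keys) @ q          # cosine similarities against all keys
        top = np.argsort(-sims)[:k]             # indices of the k best matches
        return [self.values[i] for i in top]

# Hypothetical usage:
# memory = TrajectoryMemory(embed_fn=pretrained_encoder)
# for traj in offline_dataset:
#     memory.add(traj)
# neighbours = memory.retrieve(current_context, k=4)
```

In the actual method, the retrieved sub-trajectories would then condition the Decision Transformer (for example via cross-attention) alongside a short current context, which is what lets the model perform well without long context windows.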

Takeaways, Limitations

Takeaways:
Presents a novel method (RA-DT) that improves the efficiency of in-context learning in reinforcement learning.
Effectively handles long-episode problems in complex environments.
The domain-agnostic retrieval mechanism broadens applicability to diverse environments.
The public release of the accompanying datasets supports future research.
Limitations:
Performance evaluation is limited to specific environments (grid-worlds, robot simulations, procedurally generated video games).
Generalization to more complex and diverse environments still needs to be verified.
Further analysis of the efficiency and scalability of the retrieval mechanism is needed.