Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

LLMs for sensory-motor control: Combining in-context and iterative learning

Created by
  • Haebom

Author

Jonata Tyska Carvalho, Stefano Nolfi

Outline

This paper proposes a method for controlling an embodied agent with a large language model (LLM) that directly maps continuous observation vectors to continuous action vectors. The LLM generates a control strategy from textual descriptions of the agent, environment, and goal, and iteratively refines that strategy using performance feedback and sensorimotor data. The method is validated on classical control tasks from the Gymnasium library and the inverted pendulum task from the MuJoCo library, and it remains effective even with relatively small models such as GPT-oss:120b and Qwen2.5:72b. By integrating symbolic knowledge obtained through reasoning with sub-symbolic sensorimotor data collected as the agent interacts with the environment, the method finds optimal or near-optimal solutions.
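The propose-evaluate-refine loop described above can be sketched as follows. This is a minimal illustration, not the paper's actual setup: the 1D point-mass environment, the hand-written candidate controllers, and the `mock_llm_propose` stub (standing in for a real LLM call that would receive the textual task description and feedback) are all assumptions made for the sake of a runnable example.

```python
import math

# Toy stand-in for a continuous-control task (NOT the Gymnasium/MuJoCo
# environments from the paper): a 1D point mass that should stay near x = 0.
def rollout(policy, steps=200):
    """Run one episode and return (total_reward, trajectory)."""
    x, v = 1.0, 0.0                            # observation: position, velocity
    total, traj = 0.0, []
    for _ in range(steps):
        a = max(-1.0, min(1.0, policy(x, v)))  # continuous action, clipped
        v += 0.05 * a
        x += 0.05 * v
        total += -abs(x)                       # reward: stay close to the origin
        traj.append((x, v, a))
    return total, traj

# Placeholder for the LLM call. In the paper, the model receives textual
# descriptions of the agent, environment, and goal plus performance feedback
# and sensorimotor data, and returns a refined control strategy; here we fake
# it with progressively stronger hand-written controllers so the loop runs
# offline.
def mock_llm_propose(iteration, feedback):
    candidates = [
        lambda x, v: -0.5 * x,             # proportional-only controller
        lambda x, v: -1.0 * x - 0.5 * v,   # PD controller (adds damping)
        lambda x, v: -2.0 * x - 1.5 * v,   # stiffer PD controller
    ]
    return candidates[min(iteration, len(candidates) - 1)]

def iterative_learning(n_iters=3):
    """Alternate between proposing a strategy and evaluating it in the env."""
    best_policy, best_score, feedback = None, -math.inf, ""
    for i in range(n_iters):
        policy = mock_llm_propose(i, feedback)   # in-context proposal
        score, traj = rollout(policy)            # collect sensorimotor data
        feedback = (f"iteration {i}: return={score:.2f}, "
                    f"final x={traj[-1][0]:.3f}")
        if score > best_score:                   # keep the best strategy so far
            best_policy, best_score = policy, score
    return best_policy, best_score
```

The key structural point is that each iteration feeds a textual summary of the previous rollout back into the next proposal, so the strategy is refined in context rather than by gradient updates.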

Takeaways, Limitations

Takeaways:
  • Presents a new method for controlling embodied agents using LLMs.
  • Achieves efficient problem solving by integrating symbolic knowledge with sub-symbolic sensorimotor data.
  • Performs effectively even with relatively small LLMs.
  • Confirms applicability across multiple environments (Gymnasium, MuJoCo).
Limitations:
  • Further research is needed on the generalization performance of the proposed method.
  • Performance evaluation in more complex and diverse environments is needed.
  • Only a limited set of LLMs was tested; applicability to other LLMs remains to be examined.
  • The efficiency and stability of the learning process need improvement.