Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

Hierarchical Reinforcement Learning with Low-Level MPC for Multi-Agent Control

Created by
  • Haebom

Author

Max Studt, Georg Schildbach

Outline

Achieving safe and coordinated behavior in dynamic, constrained environments is a key challenge for learning-based control. This paper proposes a hierarchical framework that combines tactical decision-making via reinforcement learning (RL) with low-level execution via model predictive control (MPC). For multi-agent systems, a high-level policy selects an abstract goal from structured regions of interest (ROIs), while MPC ensures dynamically feasible and safe motion. Evaluated on a predator-prey benchmark, the approach outperforms end-to-end and shielding-based RL baselines in terms of reward, safety, and consistency, highlighting the benefits of combining structured learning with model-based control.
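As a rough illustration of this two-level architecture, the sketch below pairs a stand-in high-level policy (which picks one ROI as the current goal on a slow timescale) with a stand-in low-level controller that takes one speed-limited step toward it at every timestep. All names, the random policy, and the simple point dynamics are hypothetical; a real implementation would use a trained RL policy and a constrained receding-horizon MPC solver with collision-avoidance constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Structured regions of interest: a few fixed goal points in 2-D (illustrative).
ROI_CENTERS = np.array([[0.0, 5.0], [5.0, 5.0], [5.0, 0.0]])

def high_level_policy(observation):
    """Stand-in for the trained RL policy: returns the index of an ROI.
    A real policy would map the observation to a discrete action."""
    return rng.integers(len(ROI_CENTERS))

def mpc_step(state, goal, v_max=0.2):
    """Stand-in for the low-level MPC: one dynamically feasible, speed-limited
    step toward the goal. A real MPC would solve a constrained optimization
    over a receding horizon, also enforcing safety constraints."""
    direction = goal - state
    dist = np.linalg.norm(direction)
    if dist < 1e-8:
        return state
    return state + direction / dist * min(v_max, dist)

# Hierarchical loop: the high-level policy re-selects a goal on a slow
# timescale, while the low-level controller acts at every step.
state = np.zeros(2)
for t in range(100):
    if t % 10 == 0:  # high-level decision period
        goal = ROI_CENTERS[high_level_policy(state)]
    state = mpc_step(state, goal)
print("final state:", state)
```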

Takeaways, Limitations

Takeaways:
• Proposes a hierarchical framework that combines tactical decision-making via reinforcement learning (RL) with low-level execution via model predictive control (MPC).
• Achieves safe and consistent behavior in multi-agent systems through the interplay between the high-level policy and MPC.
• Demonstrates superior performance over end-to-end and shielding-based RL baselines on a predator-prey benchmark.
Limitations:
• No specific limitations are stated in the paper summary.