Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Physics-informed Imitative Reinforcement Learning for Real-world Driving

Created by
  • Haebom

Authors

Hang Zhou, Yihao Qin, Dan Xu, Yiding Ji

Outline

In this paper, we propose a physics-informed, data-driven imitative reinforcement learning (IRL) method to address the knowledge transfer challenges faced by IRL agents in dynamic closed-loop environments. Existing IRL methods suffer from performance degradation caused by the conflicting objectives of imitation learning (IL) and reinforcement learning (RL), sample inefficiency, and the complexity of hidden world models and physics laws. The proposed method naturally derives the physical principles of vehicle dynamics during training by jointly using expert demonstration data and exploration data. Experiments on the Waymax benchmark show that the proposed method outperforms existing IL, RL, and IRL algorithms, reducing the collision rate by 37.8% and the road departure rate by 22.2%.
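To make the idea of imitative reinforcement learning concrete, the sketch below shows one common way to mix a behavior-cloning loss on expert demonstrations with a policy-gradient loss on exploration rollouts. This is an illustrative assumption, not the paper's implementation: the network shape, the mixing weight `alpha`, and the omission of the physics-informed term are simplifications made here for clarity.

```python
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Simple Gaussian policy over continuous driving actions (illustrative)."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs: torch.Tensor) -> torch.distributions.Normal:
        return torch.distributions.Normal(self.net(obs), self.log_std.exp())


def combined_loss(policy, expert_obs, expert_act,
                  rollout_obs, rollout_act, advantages, alpha=0.5):
    # Imitation term: negative log-likelihood of expert actions (behavior cloning).
    il_loss = -policy.dist(expert_obs).log_prob(expert_act).sum(-1).mean()
    # RL term: vanilla policy gradient on exploration data, weighted by advantages.
    rl_loss = -(policy.dist(rollout_obs).log_prob(rollout_act).sum(-1) * advantages).mean()
    # A fixed mixing weight is a placeholder; the paper's actual balancing scheme may differ.
    return alpha * il_loss + (1.0 - alpha) * rl_loss
```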

Takeaways, Limitations

Takeaways:
  • Addresses the knowledge transfer problem in dynamic closed-loop environments through a data-driven, physics-informed IRL method.
  • Improves autonomous driving safety by reducing collision and road departure rates.
  • Demonstrates superior performance over existing methods on the Waymax benchmark.
Limitations:
  • Further research is needed on the generalization performance of the proposed method.
  • Applicability to a wider range of environments and vehicle models has yet to be evaluated.
  • The process by which physical principles are automatically derived is not described in detail.