Daily Arxiv

This page organizes papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, simply cite the source.

TreeIRL: Safe Urban Driving with Tree Search and Inverse Reinforcement Learning

Created by
  • Haebom

Author

Momchil S. Tomov, Sang Uk Lee, Hansford Hendrago, Jinwook Huh, Teawon Han, Forbes Howington, Rafael da Silva, Gianmarco Bernasconi, Marc Heim, Samuel Findler, Xiaonan Ji, Alexander Boule, Michael Napoli, Kuo Chen, Jesse Miller, Boaz Floor, Yunqing Hu

Outline

TreeIRL is a novel planner for autonomous driving that combines Monte Carlo Tree Search (MCTS) and Inverse Reinforcement Learning (IRL) to achieve state-of-the-art performance in both simulation and real-world driving. MCTS is used to identify a set of safe candidate trajectories, and a deep IRL scoring function then selects the most human-like trajectory among them. TreeIRL has been tested in large-scale simulations and over 500 miles of real-world autonomous driving in the Las Vegas metropolitan area. Test scenarios include congested urban traffic, adaptive cruise control, cut-ins, and traffic lights. TreeIRL achieved the best overall performance by balancing safety, progress, comfort, and human-likeness.
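The two-stage selection described above can be sketched as follows. This is a hypothetical simplification for illustration only: the candidate list stands in for the MCTS search, `is_safe` for the planner's safety filtering, and a hand-made linear score for the paper's deep IRL scoring function; none of these names or formulas come from the paper.

```python
from typing import List, Optional, Tuple

Trajectory = List[Tuple[float, float]]  # sequence of (x, y) waypoints

def is_safe(traj: Trajectory, obstacles: List[Tuple[float, float]],
            margin: float = 1.0) -> bool:
    # Toy safety check: every waypoint must stay `margin` away from obstacles.
    return all(
        ((x - ox) ** 2 + (y - oy) ** 2) ** 0.5 >= margin
        for (x, y) in traj for (ox, oy) in obstacles
    )

def irl_score(traj: Trajectory) -> float:
    # Stand-in for the learned IRL scorer: linear in two hand-made features,
    # rewarding forward progress and penalizing lateral jerk (a comfort proxy).
    progress = traj[-1][0] - traj[0][0]
    jerk = sum(abs(traj[i + 1][1] - 2 * traj[i][1] + traj[i - 1][1])
               for i in range(1, len(traj) - 1))
    return 1.0 * progress - 0.5 * jerk

def select_trajectory(candidates: List[Trajectory],
                      obstacles: List[Tuple[float, float]]) -> Optional[Trajectory]:
    # Stage 1: keep only safe candidates (MCTS's role in the paper).
    safe = [t for t in candidates if is_safe(t, obstacles)]
    if not safe:
        return None
    # Stage 2: pick the highest-scoring (most "human-like") safe candidate.
    return max(safe, key=irl_score)

straight = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
swervy = [(0.0, 0.0), (1.0, 1.0), (2.0, -1.0), (3.0, 1.0)]
blocked = [(0.0, 0.0), (1.0, 0.0), (2.0, 2.0), (3.0, 2.0)]  # passes an obstacle

best = select_trajectory([straight, swervy, blocked], obstacles=[(2.0, 2.0)])
```

With these toy inputs, `blocked` is filtered out for violating the safety margin, and the smooth `straight` candidate beats `swervy` on the comfort term.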

Takeaways, Limitations

TreeIRL is the first demonstration of MCTS-based planning on public roads.
The work underscores the importance of evaluating planners against a range of metrics and in real-world environments.
Performance could be further improved with reinforcement learning and imitation learning. The approach also provides a framework for exploring combinations of classical and learning-based methods to address the planning bottleneck in autonomous driving.