Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

From Words to Collisions: LLM-Guided Evaluation and Adversarial Generation of Safety-Critical Driving Scenarios

Created by
  • Haebom

Author

Yuan Gao, Mattia Piccinini, Korbinian Moller, Amr Alanwar, Johannes Betz

Outline

This paper focuses on the robust assessment and generation of safety-critical scenarios in scenario-based virtual testing for autonomous vehicles. To overcome the limitations of existing approaches that rely on handcrafted scenarios, the authors present a novel approach that automatically assesses and generates safety-critical driving scenarios by combining large language models (LLMs) with structured scenario parsing and prompt engineering. They introduce a scenario assessment module that uses Cartesian and egocentric prompting strategies, and an adversarial generation module that produces risky scenarios by modifying the trajectories of attacker agents around the ego vehicle. The approach is validated in a 2D simulation framework with multiple pre-trained LLMs: the assessment module effectively detects crash scenarios and infers scenario safety, while the generation module identifies high-risk agents and synthesizes realistic safety-critical scenarios. In conclusion, the paper demonstrates that LLMs with domain-informed prompting can effectively assess and generate safety-critical driving scenarios while reducing reliance on handcrafted metrics. The code and scenarios are publicly available ( https://github.com/TUM-AVS/From-Words-to-Collisions ).
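The difference between the two prompting strategies can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the `AgentState` fields, function names, and prompt wording are assumptions; the key idea is that a Cartesian prompt describes all agents in a shared global frame, while an egocentric prompt re-expresses the other agents' positions relative to the ego vehicle before querying the LLM.

```python
import math
from dataclasses import dataclass

@dataclass
class AgentState:
    agent_id: str
    x: float        # global x position [m]
    y: float        # global y position [m]
    heading: float  # global heading [rad]

QUESTION = "Question: is this scenario safety-critical? Answer Yes/No with reasoning."

def cartesian_prompt(agents: list[AgentState]) -> str:
    """Describe all agents in global (Cartesian) coordinates."""
    lines = ["Scenario (global frame):"]
    for a in agents:
        lines.append(f"- {a.agent_id}: position ({a.x:.1f}, {a.y:.1f}) m, "
                     f"heading {math.degrees(a.heading):.0f} deg")
    lines.append(QUESTION)
    return "\n".join(lines)

def egocentric_prompt(ego: AgentState, others: list[AgentState]) -> str:
    """Describe the other agents relative to the ego vehicle's frame."""
    lines = ["Scenario (ego frame, ego at origin facing +x):"]
    cos_h, sin_h = math.cos(-ego.heading), math.sin(-ego.heading)
    for a in others:
        dx, dy = a.x - ego.x, a.y - ego.y
        # rotate the global offset into the ego's heading frame
        rx = dx * cos_h - dy * sin_h
        ry = dx * sin_h + dy * cos_h
        lines.append(f"- {a.agent_id}: {rx:.1f} m ahead, {ry:.1f} m left of ego")
    lines.append(QUESTION)
    return "\n".join(lines)

if __name__ == "__main__":
    ego = AgentState("ego", x=0.0, y=0.0, heading=0.0)
    truck = AgentState("truck", x=10.0, y=0.0, heading=math.pi)
    print(cartesian_prompt([ego, truck]))
    print(egocentric_prompt(ego, [truck]))
```

Either prompt string would then be sent to a pre-trained LLM; the paper's finding is that the choice of frame in which the scene is verbalized affects how well the model judges criticality.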

Takeaways, Limitations

Takeaways:
By demonstrating that LLMs can automatically assess and generate safety-critical autonomous driving scenarios, this work opens a way past the labor cost and scalability limits of handcrafted scenario design.
The Cartesian and egocentric prompting strategies and the adversarial generation module enable more effective and realistic generation and evaluation of safety scenarios.
The publicly released code and scenarios can encourage follow-up research by other groups.
Limitations:
The approach has so far been validated only in a 2D simulation environment; extending it to 3D and applying it on real roads requires further research.
Because the approach depends on LLM performance, limitations of LLMs (e.g., bias, uncertainty) may affect the results.
Further validation is needed on how comprehensively the method can generate and evaluate different types of risk scenarios.
Further research is needed on the interpretability and reliability of LLM judgments.