Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Visual SLAMMOT Considering Multiple Motion Models

Created by
  • Haebom

Author

Peilin Tian, Hao Li

Outline

This paper addresses the integration of simultaneous localization and mapping (SLAM) and multi-object tracking (MOT), both of which play a crucial role in autonomous driving. Conventionally, SLAM and MOT are processed independently, which limits accuracy: SLAM assumes a static environment, while MOT relies on accurate ego-vehicle pose information. To address these issues, the research team previously proposed a LiDAR-based SLAMMOT that considers multiple motion models (IMM-SLAMMOT). This paper extends that approach to a vision-based system and proposes a visual SLAMMOT, with the goal of verifying the feasibility and advantages of a visual SLAMMOT that considers multiple motion models.
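To make the "multiple motion models" idea concrete, below is a minimal, illustrative sketch of an Interacting Multiple Model (IMM) style update: each tracked object is filtered under several candidate motion models in parallel, and their estimates are fused by model probabilities. The toy 1-D models, transition matrix, and all numeric values are assumptions for illustration only, not taken from the paper.

```python
import math

def gaussian_likelihood(residual, var):
    """Likelihood of a scalar measurement residual under N(0, var)."""
    return math.exp(-0.5 * residual ** 2 / var) / math.sqrt(2 * math.pi * var)

def imm_step(x, v, z, mu, trans, dt=0.1, meas_var=0.25):
    """One IMM-style cycle with two toy 1-D motion models:
    model 0 = static object, model 1 = constant velocity.
    x: position estimate, v: assumed velocity, z: new position measurement,
    mu: current model probabilities, trans: Markov model-transition matrix."""
    # 1) Predicted model probabilities from the Markov chain.
    c = [sum(trans[i][j] * mu[i] for i in range(2)) for j in range(2)]
    # 2) Model-conditioned motion predictions.
    preds = [x, x + v * dt]  # static vs. constant-velocity prediction
    # 3) Likelihood of the measurement under each model.
    lik = [gaussian_likelihood(z - p, meas_var) for p in preds]
    # 4) Posterior model probabilities.
    post = [c[j] * lik[j] for j in range(2)]
    s = sum(post)
    mu_new = [p / s for p in post]
    # 5) Fuse the model-conditioned estimates by probability.
    x_fused = sum(mu_new[j] * preds[j] for j in range(2))
    return x_fused, mu_new

# A moving object: consistent motion in the measurements should shift
# probability toward the constant-velocity model over successive steps.
trans = [[0.95, 0.05], [0.05, 0.95]]
x, v, mu = 0.0, 1.0, [0.5, 0.5]
for k in range(1, 6):
    z = 0.1 * k  # object advances +0.1 per step
    x, mu = imm_step(x, v, z, mu, trans)
print(mu)  # the constant-velocity model should dominate
```

The point of the sketch is the probabilistic blending in steps 4–5: rather than committing to one motion assumption (as a static-world SLAM would), the tracker keeps all hypotheses alive and lets the data weight them, which is the core mechanism the IMM-SLAMMOT line of work builds on.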

Takeaways, Limitations

Takeaways:
Demonstrates that the advantages of LiDAR-based SLAMMOT carry over to vision-based systems.
Suggests that SLAMMOT considering multiple motion models can improve the accuracy of autonomous driving systems that rely on visual information.
Highlights the utility of jointly solving SLAM and MOT rather than treating them as independent modules.
Limitations:
The paper provides few specific details on the performance evaluation of the proposed visual SLAMMOT.
Further verification of robustness in various environments and conditions is needed.
Further research is needed to apply it to actual autonomous driving systems.