This page collects papers on artificial intelligence published around the world. Summaries are generated with Google Gemini, and the page is operated on a non-profit basis. Copyright of each paper belongs to its authors and their institutions; when sharing, simply cite the source.
MmWave Radar-Based Non-Line-of-Sight Pedestrian Localization at T-Junctions Utilizing Road Layout Extraction via Camera
Created by
Haebom
Author
Byeonggyu Park, Hee-Yeun Kim, Byonghyok Choi, Hansang Cho, Byungkwan Kim, Soomok Lee, Mingu Jeon, Seong-Woo Kim
Outline
Estimating pedestrian locations in non-line-of-sight (NLoS) regions is a critical challenge for autonomous driving systems. We propose a novel framework that fuses mmWave radar and a camera to address it. The proposed method uses road layout information extracted from the camera to interpret 2D radar point clouds (PCDs), reconstructing the spatial scene and estimating NLoS pedestrian positions at T-junctions. The practicality of the method is verified on a dataset collected with a radar-camera system mounted on a real vehicle.
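The core geometric idea behind this class of NLoS methods can be sketched in a few lines: a radar multipath return places a "ghost" detection behind the reflecting wall, and if the wall's position is known (here, inferred from the camera's road layout), mirroring the ghost across the wall recovers an estimate of the true pedestrian position. The sketch below is an illustrative assumption about the geometry, not the paper's actual pipeline; the wall line and point names are hypothetical.

```python
import numpy as np

def mirror_across_wall(ghost_xy, wall_point, wall_normal):
    """Reflect a 2D radar 'ghost' detection across a known wall line.

    In NLoS multipath, the radar observes the pedestrian's mirror image
    behind the reflecting surface; given the wall (e.g., from a
    camera-derived road layout), reflecting the ghost point across it
    yields an estimate of the true pedestrian position.
    """
    n = np.asarray(wall_normal, dtype=float)
    n /= np.linalg.norm(n)                                    # unit normal to the wall
    d = np.dot(np.asarray(ghost_xy, dtype=float) - wall_point, n)  # signed distance to wall
    return np.asarray(ghost_xy, dtype=float) - 2.0 * d * n    # mirrored (true) position

# Illustrative T-junction: wall along the y-axis (x = 0), normal pointing +x.
ghost = (-3.0, 5.0)  # ghost detection appears behind the wall
true_est = mirror_across_wall(ghost, wall_point=(0.0, 0.0), wall_normal=(1.0, 0.0))
print(true_est)  # → [3. 5.]
```

In practice the wall line would come from the camera's road layout extraction, and the radar PCD would first be filtered to the multipath returns associated with that surface.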
Takeaways, Limitations
•
Takeaways:
◦
Improves the accuracy of radar-based localization in NLoS environments by leveraging the camera's visual information.
◦
The practicality of the method is demonstrated on a dataset collected in a real-world driving environment.
◦
Demonstrates the feasibility of estimating pedestrian locations in NLoS environments by fusing mmWave radar and camera data.
•
Limitations:
◦
No specific quantitative performance metrics (e.g., accuracy, precision) are reported.
◦
Limited detail on how the camera and radar are combined, particularly in the data fusion step.
◦
Further research is needed on generalization to diverse NLoS environments (e.g., complex urban scenes with varied obstacles).