This paper proposes a deep unrolling algorithm for single-photon LiDAR imaging in noisy environments with multiple targets. Existing statistical methods, while highly interpretable, struggle with complex scenes; deep learning-based methods offer excellent accuracy and robustness but either lack interpretability or are limited to processing a single peak per pixel. The proposed algorithm unrolls the inference steps of a hierarchical Bayesian model and, using a dual depth map representation together with geometric deep learning, extracts features directly from point clouds. By combining the advantages of statistical and learning-based methods, it achieves high reconstruction accuracy while also providing uncertainty quantification. Experimental results on synthetic and real-world data demonstrate performance competitive with existing methods, with the additional benefit of uncertainty information.
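To make the deep unrolling idea mentioned above concrete, the sketch below shows the generic paradigm in PyTorch: a fixed number of model-based iterations (a gradient step on a data-fidelity term followed by a learned refinement module) are "unrolled" into a single trainable network. This is only an illustrative toy under assumed choices, not the paper's method: the quadratic data term, the `LearnedPrior` CNN, and all shapes and hyperparameters are hypothetical placeholders and do not reflect the hierarchical Bayesian model or dual depth map representation described in the abstract.

```python
# Minimal illustrative sketch of deep unrolling (assumed, not the paper's algorithm).
import torch
import torch.nn as nn


class LearnedPrior(nn.Module):
    """Small CNN standing in for a learned proximal/denoising step (hypothetical)."""

    def __init__(self, channels: int = 1, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual refinement of the current estimate.
        return x + self.net(x)


class UnrolledEstimator(nn.Module):
    """Unrolls K iterations of (gradient step on a quadratic data term) + (learned prior)."""

    def __init__(self, num_iterations: int = 5):
        super().__init__()
        self.priors = nn.ModuleList(LearnedPrior() for _ in range(num_iterations))
        # One learnable step size per unrolled iteration.
        self.step_sizes = nn.Parameter(torch.full((num_iterations,), 0.1))

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: noisy observation (here a corrupted depth map), shape (B, 1, H, W).
        x = y.clone()
        for prior, step in zip(self.priors, self.step_sizes):
            grad = x - y          # gradient of the assumed quadratic data-fidelity term
            x = x - step * grad   # gradient descent step
            x = prior(x)          # learned refinement replacing a hand-crafted prior
        return x


if __name__ == "__main__":
    model = UnrolledEstimator()
    noisy_depth = torch.rand(2, 1, 32, 32)
    estimate = model(noisy_depth)
    print(estimate.shape)  # torch.Size([2, 1, 32, 32])
```

Because each unrolled iteration mirrors a step of an optimization or Bayesian inference scheme, the resulting network retains the interpretability of statistical methods while its learned modules and step sizes are trained end to end, which is the trade-off the abstract highlights.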