This paper proposes ULOPS, an Uncertainty-guided LiDAR Open-set Panoptic Segmentation framework that addresses a key limitation of existing closed-set LiDAR panoptic segmentation models: their inability to detect unknown object instances. ULOPS leverages Dirichlet-based evidential learning to model prediction uncertainty, coupling semantic segmentation with uncertainty estimates and embeddings with prototypical associations, alongside a separate decoder for instance-centric prediction. During inference, the uncertainty estimates are used to identify and segment unknown instances. To strengthen the model's ability to distinguish between known and unknown objects, three uncertainty-driven loss functions are introduced: uniform evidence loss, adaptive uncertainty separation loss, and contrastive uncertainty loss. Open-set performance is evaluated by extending the KITTI-360 benchmark and introducing a new open-set evaluation on nuScenes; experiments demonstrate that the proposed approach outperforms existing open-set LiDAR panoptic segmentation methods.
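To make the Dirichlet-based uncertainty estimation concrete, the following is a minimal sketch of the standard evidential formulation (not the authors' implementation): non-negative per-class evidence is mapped to Dirichlet concentration parameters, from which expected class probabilities and a total uncertainty (vacuity) score are derived. The function name and use of NumPy are illustrative assumptions.

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Standard evidential formulation (illustrative, not the paper's code).

    evidence: non-negative per-class evidence, shape (..., K).
    Returns expected class probabilities and a vacuity score u in (0, 1];
    u -> 1 when evidence is absent, which flags likely unknown objects.
    """
    alpha = evidence + 1.0                  # Dirichlet concentration parameters
    strength = alpha.sum(axis=-1, keepdims=True)  # Dirichlet strength S
    prob = alpha / strength                 # expected class probabilities
    num_classes = evidence.shape[-1]
    u = num_classes / strength.squeeze(-1)  # total uncertainty u = K / S
    return prob, u

# A point with no accumulated evidence gets maximal uncertainty:
prob, u = dirichlet_uncertainty(np.zeros(4))
```

In an open-set setting, points whose vacuity exceeds a threshold can be routed to the unknown-instance segmentation path rather than being forced into a known class.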