This paper provides a comprehensive review of uncertainty modeling in deep learning-based semantic segmentation. Despite recent advances in semantic segmentation, most models relax the Bayesian assumptions in favor of point estimates, thereby omitting uncertainty information that is important for decision making. This reliance on point estimates has sparked interest in probabilistic segmentation, but the related research remains fragmented. This paper integrates and contextualizes the fundamental concepts of uncertainty modeling, including the distinction between epistemic and aleatoric uncertainty, and highlights its role in four major segmentation-related tasks, such as active learning. By unifying theory, terminology, and applications, it provides a consistent foundation for researchers and identifies key challenges, including the strong assumptions underlying spatial aggregation, the lack of standardized benchmarks, and the pitfalls of current uncertainty quantification methods. We observe emerging trends such as the adoption of generative models and growing interest in distribution-free and sampling-free approaches to uncertainty estimation. We also suggest directions for advancing uncertainty-aware segmentation in deep learning, including practical strategies for disentangling different sources of uncertainty, novel uncertainty modeling approaches, and improved Transformer-based backbones. Ultimately, we aim to support the development of more reliable, efficient, and interpretable segmentation models.
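As a concrete illustration of the epistemic/aleatoric distinction mentioned above, the sketch below decomposes per-pixel predictive uncertainty using Monte Carlo dropout, one common sampling-based estimator. It is a minimal PyTorch example under assumed choices (a toy two-layer fully convolutional network, a dropout rate of 0.5, and 20 stochastic forward passes), not an implementation of any specific method surveyed here: predictive entropy gives the total uncertainty, the expected per-sample entropy approximates the aleatoric component, and their difference (the mutual information) approximates the epistemic component.

```python
# Minimal sketch (illustrative only): Monte Carlo dropout on a toy segmentation
# network, decomposing per-pixel predictive uncertainty into an aleatoric term
# (expected entropy) and an epistemic term (mutual information). The network,
# dropout rate, and number of samples are assumptions for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, T, EPS = 5, 20, 1e-12

# Toy fully convolutional segmenter with spatial dropout between its two layers.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Dropout2d(p=0.5),          # kept active at test time for MC sampling
    nn.Conv2d(16, NUM_CLASSES, kernel_size=3, padding=1),
)
model.train()                     # keep dropout stochastic during inference

image = torch.randn(1, 3, 64, 64)  # placeholder input image

with torch.no_grad():
    # T stochastic forward passes -> (T, C, H, W) per-pixel class probabilities.
    probs = torch.stack([F.softmax(model(image), dim=1)[0] for _ in range(T)])

mean_probs = probs.mean(dim=0)                                      # (C, H, W)
total = -(mean_probs * (mean_probs + EPS).log()).sum(dim=0)         # predictive entropy
aleatoric = -(probs * (probs + EPS).log()).sum(dim=1).mean(dim=0)   # expected entropy
epistemic = total - aleatoric                                       # mutual information

print(f"mean per-pixel uncertainty (total / aleatoric / epistemic): "
      f"{total.mean():.3f} / {aleatoric.mean():.3f} / {epistemic.mean():.3f}")
```

The resulting epistemic map is the kind of quantity used in the active-learning setting referenced in the abstract, since it highlights pixels where the model itself (rather than the data) is the dominant source of uncertainty.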