Unsupervised Domain Adaptive Semantic Segmentation (UDA-SS) aims to transfer supervision from a labeled source domain to an unlabeled target domain. This study unifies UDA-SS research across image and video scenarios, enabling a more comprehensive understanding, synergistic advancement, and efficient knowledge sharing between the two settings. To this end, we explore integrated UDA-SS from a general data augmentation perspective, presenting a unified conceptual framework that enables improved generalization and cross-fertilization of ideas. Specifically, we propose a Quad-directional Mixup (QuadMix) method that addresses distinct point attributes and feature mismatches through four-directional paths for intra- and inter-domain mixing in the feature space. To handle temporal variations in video, we further integrate optical flow-based feature aggregation across the spatial and temporal dimensions, achieving fine-grained domain alignment.
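As a rough illustration of the four-directional mixing idea, the sketch below combines source and target feature maps along four paths: two intra-domain mixes (each domain mixed with a shuffled copy of itself) and two inter-domain mixes (source-to-target and target-to-source). This is a minimal NumPy sketch under our own simplifying assumptions; the function names (`mixup`, `quad_mix`), the use of batch shuffling for intra-domain mixing, and the fixed mixing coefficient are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def mixup(a, b, lam):
    """Convex combination of two feature tensors of the same shape."""
    return lam * a + (1.0 - lam) * b

def quad_mix(src_feat, tgt_feat, lam=0.5, rng=None):
    """Illustrative four-directional mixing over feature maps (B, C, H, W).

    Paths:
      - intra_src / intra_tgt: each domain mixed with a shuffled batch
        of itself (intra-domain mixing).
      - src_to_tgt / tgt_to_src: cross-domain mixing in both directions
        (inter-domain mixing).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    perm_s = rng.permutation(src_feat.shape[0])
    perm_t = rng.permutation(tgt_feat.shape[0])
    return {
        "intra_src": mixup(src_feat, src_feat[perm_s], lam),
        "intra_tgt": mixup(tgt_feat, tgt_feat[perm_t], lam),
        "src_to_tgt": mixup(src_feat, tgt_feat, lam),
        "tgt_to_src": mixup(tgt_feat, src_feat, lam),
    }
```

In practice such mixing would be applied to intermediate encoder features (with corresponding label or pseudo-label mixing), but the four-path structure above captures the quad-directional intuition.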