This paper proposes DS²Net, a novel deeply supervised network for medical image segmentation. Unlike previous studies that supervise either low-level fine-grained features or high-level semantic features, DS²Net supervises both simultaneously through a detail enhancement module (DEM) and a semantic enhancement module (SEM). The DEM and SEM use low-level and high-level feature maps, respectively, to generate fine-grained and semantic masks that strengthen feature supervision. Furthermore, we introduce an uncertainty-based supervision loss that adaptively allocates supervision strength to the features at each scale, avoiding the inefficient heuristic weighting designs of previous studies. Extensive experiments on six medical image benchmarks, spanning colonoscopy, ultrasound, and microscopy images, demonstrate that DS²Net outperforms state-of-the-art methods.
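The uncertainty-based supervision loss can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: it assumes a Kendall-style homoscedastic-uncertainty weighting, where each scale i carries a learnable log-variance s_i and its supervision loss L_i is combined as exp(-s_i)·L_i + s_i, so the optimizer itself allocates supervision strength across scales instead of relying on hand-tuned weights.

```python
import math

def uncertainty_weighted_loss(scale_losses, log_vars):
    """Combine per-scale supervision losses with learnable uncertainty weights.

    Hypothetical sketch (Kendall-style weighting, assumed here):
    each scale contributes exp(-s) * L + s, where s is that scale's
    learnable log-variance. Larger s down-weights a noisy scale's loss,
    while the additive s term penalizes unbounded growth of s.
    """
    assert len(scale_losses) == len(log_vars)
    return sum(math.exp(-s) * loss + s
               for loss, s in zip(scale_losses, log_vars))

# With all log-variances at zero, the weights are 1 and the result
# reduces to a plain sum of the per-scale losses.
total = uncertainty_weighted_loss([1.0, 2.0, 0.5], [0.0, 0.0, 0.0])
```

In a training loop, the `log_vars` would be registered as trainable parameters alongside the network weights, so the balance between fine-grained and semantic supervision is learned rather than fixed by heuristics.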