In this paper, we propose a structure-aware metric function, the signal Dice Similarity Coefficient (SDSC), for time series self-supervised learning. Many existing self-supervised learning methods use distance-based objective functions such as the mean squared error (MSE), which is sensitive to amplitude, invariant to waveform polarity, and unbounded in scale, properties that hinder semantic alignment and reduce interpretability. SDSC addresses this problem by quantifying the structural consistency between temporal signals based on the intersection of encoded amplitudes, derived from the Dice Similarity Coefficient (DSC). Although SDSC is defined as a structure-aware metric, it can be used as a loss function for gradient-based optimization by subtracting it from one and applying a differentiable approximation of the Heaviside function. We also propose a hybrid loss formulation that combines SDSC and MSE to improve stability and, when necessary, preserve amplitude. Experimental results on forecasting and classification benchmarks demonstrate that SDSC-based pretraining achieves performance comparable to or better than MSE, especially in domain-specific and low-resource scenarios. These results suggest that structural fidelity of signal representation improves semantic representation quality, and that structure-aware metrics should be considered a viable alternative to existing distance-based methods.
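To make the abstract's construction concrete, the following is a minimal sketch of one plausible instantiation: a Dice-style overlap between two signals in which the Heaviside gate on sign agreement is replaced by a sigmoid so the quantity is differentiable, plus the hybrid combination with MSE. The specific formula, the sharpness constant `k`, and the mixing weight `alpha` are illustrative assumptions, not necessarily the paper's exact definitions.

```python
import numpy as np

def sdsc(x, y, k=10.0):
    """Sketch of a signal Dice Similarity Coefficient (assumed form).

    Overlap is taken as min(|x|, |y|) wherever the two signals agree
    in sign; the sign test H(x * y) is smoothed with a sigmoid of
    sharpness k so the metric admits gradient-based optimization.
    """
    h = 1.0 / (1.0 + np.exp(-k * x * y))          # differentiable Heaviside approx.
    inter = np.sum(h * np.minimum(np.abs(x), np.abs(y)))
    denom = np.sum(np.abs(x)) + np.sum(np.abs(y))
    return 2.0 * inter / denom                     # in [0, 1], 1 = identical structure

def hybrid_loss(x, y, alpha=0.5):
    """Hypothetical hybrid objective: structure term plus amplitude term."""
    return alpha * (1.0 - sdsc(x, y)) + (1.0 - alpha) * np.mean((x - y) ** 2)
```

As a sanity check under this assumed form, a signal compared with itself scores near 1 (loss near 0), while a polarity-flipped copy, which MSE would penalize only through amplitude differences, scores near 0 because the sign-agreement gate suppresses the overlap.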