This paper proposes DARSD, a novel framework for unsupervised domain adaptation (UDA) that addresses the domain shift problem in time-series analysis from the perspective of representation space decomposition. Unlike existing UDA methods that treat features as independent entities, DARSD accounts for the inherent composition of features and separates transferable knowledge from mixed representations. DARSD consists of three main components: first, an adversarially learnable common invariant basis that projects source features into a domain-invariant subspace; second, a circular pseudo-labeling mechanism that dynamically partitions target features according to prediction confidence; and third, a hybrid contrastive learning strategy that enhances feature clustering and consistency while mitigating distributional gaps. Experimental results on four benchmarks (WISDM, HAR, HHAR, and MFD) show that DARSD outperforms 12 other UDA algorithms, achieving the best performance in 35 of 53 scenarios and ranking first across all benchmarks.
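As a rough illustration of the first two components summarized above, the following PyTorch-style sketch projects encoder features onto a learnable basis and splits target samples by prediction confidence. This is a minimal sketch based only on the abstract's description, not the authors' implementation; the module and function names, dimensions, and confidence threshold are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CommonInvariantBasis(nn.Module):
    """Hypothetical sketch of a learnable basis: features are expressed as
    coordinates in the subspace spanned by the (normalized) basis vectors."""

    def __init__(self, feature_dim: int, num_bases: int):
        super().__init__()
        self.basis = nn.Parameter(torch.randn(feature_dim, num_bases))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Normalize each basis vector, then project features onto the basis.
        basis = F.normalize(self.basis, dim=0)
        return features @ basis


def split_by_confidence(logits: torch.Tensor, threshold: float = 0.9):
    """Confidence-based split of target samples into a confident
    (pseudo-labeled) subset and an ambiguous remainder."""
    probs = logits.softmax(dim=-1)
    confidence, pseudo_labels = probs.max(dim=-1)
    confident_mask = confidence >= threshold
    return pseudo_labels, confident_mask


# Usage sketch (encoder and classifier are assumed to exist elsewhere):
# target_feats = encoder(x_target)                       # (batch, feature_dim)
# invariant = CommonInvariantBasis(128, 64)(target_feats)
# pseudo_labels, mask = split_by_confidence(classifier(invariant))
```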