Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized by Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

From Entanglement to Alignment: Representation Space Decomposition for Unsupervised Time Series Domain Adaptation

Created by
  • Haebom

Authors

Rongyao Cai, Ming Jin, Qingsong Wen, and Kexin Zhang

Outline

This paper proposes DARSD, a novel framework that approaches unsupervised domain adaptation (UDA) from the perspective of representation space decomposition, to address the domain-shift problem in time-series analysis. Unlike existing UDA methods that treat features as independent entities, DARSD accounts for the inherent composition of features and separates transferable knowledge from entangled representations. DARSD consists of three main components: first, an adversarially learnable common invariant basis that projects source features into a domain-invariant subspace; second, a circular pseudo-labeling mechanism that dynamically partitions target features by confidence; and third, a hybrid contrastive learning strategy that strengthens feature clustering and consistency while mitigating the distributional gap. Experiments on four benchmarks (WISDM, HAR, HHAR, and MFD) against 12 other UDA algorithms show that DARSD achieves the best performance in 35 of 53 cross-domain scenarios and ranks first on every benchmark.
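To make the three components concrete, below is a minimal PyTorch-style sketch. It is an illustrative approximation, not the authors' implementation: the QR-based orthonormalization of the basis, the fixed confidence threshold, and the use of a standard supervised contrastive loss in place of the paper's hybrid strategy are all assumptions introduced here for clarity.

```python
# Illustrative sketch of the three DARSD components described above.
# All function names, shapes, and thresholds are assumptions.
import torch
import torch.nn.functional as F

def project_onto_invariant_basis(features, basis):
    """Project encoder features onto a learnable common basis.

    features: (N, d) encoder outputs.
    basis:    (k, d) learnable parameter whose rows are assumed to span
              the domain-invariant subspace (trained adversarially in
              the paper; the adversary is omitted here).
    """
    Q, _ = torch.linalg.qr(basis.T)   # orthonormalize: Q is (d, k)
    return features @ Q               # subspace coordinates: (N, k)

def confidence_partition(target_logits, threshold=0.9):
    """Split target samples by prediction confidence.

    Returns pseudo-labels for the confident subset and a boolean mask
    marking which samples were kept; the threshold is an assumption.
    """
    probs = F.softmax(target_logits, dim=1)
    confidence, pseudo_labels = probs.max(dim=1)
    confident = confidence >= threshold
    return pseudo_labels[confident], confident

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Standard supervised contrastive loss, used here as a stand-in for
    the paper's hybrid strategy: samples sharing a (pseudo-)label are
    pulled together, all others pushed apart.
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.T / tau                                  # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))      # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                               # anchors with >=1 positive
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return (per_anchor[valid] / pos_counts[valid]).mean()

# Toy usage: 8 samples, 16-d features, a 4-d invariant subspace, 3 classes.
feats = torch.randn(8, 16)
basis = torch.nn.Parameter(torch.randn(4, 16))
z = project_onto_invariant_basis(feats, basis)
pseudo, kept = confidence_partition(torch.randn(8, 3), threshold=0.5)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])          # stand-in labels
print(supervised_contrastive_loss(z, labels))
```

In the actual method, the common invariant basis is trained adversarially and the pseudo-label partition is updated dynamically during training; the sketch only fixes the shapes and the flow of data between the three stages.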

Takeaways, Limitations

Takeaways:
Presents DARSD, a novel UDA framework based on representation space decomposition that effectively addresses the domain-shift problem in time-series analysis.
Overcomes limitations of existing methods by simultaneously achieving domain-invariant feature extraction and separation of transferable knowledge.
Demonstrates practicality through strong performance across diverse benchmarks.
Limitations:
The performance gains may be limited to the specific benchmark datasets tested; additional experiments on more diverse and larger datasets are needed.
The paper may lack detail on hyperparameter tuning for the hybrid contrastive learning strategy; further analysis is needed to determine optimal settings.
Computational cost may be high; research on improving efficiency is needed for real-time applications.