Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; when sharing, simply cite the source.

A theoretical framework for self-supervised contrastive learning for continuous dependent data

Created by
  • Haebom

Author

Alexander Marusov, Alexandr Yugay, Alexey Zaytsev

Outline

This paper presents an approach for extending Self-Supervised Learning (SSL), a powerful representation-learning method in computer vision, to dependent data such as time series and spatiotemporal data. Existing contrastive SSL methods assume semantic independence between samples; to overcome this limitation, the authors propose a new theoretical framework for contrastive SSL on continuous dependent data. The framework assumes that the closest samples are semantically close and introduces two possible ground-truth similarity measures: hard and soft proximity. From these, the authors derive an analytical form of the estimated similarity matrix that accommodates both types of proximity between samples and propose a dependency-aware loss function. The resulting method, "Dependent TS2Vec," is validated on time-series and spatiotemporal problems, where it outperforms state-of-the-art methods for dependent data, demonstrating the effectiveness of the theoretically grounded loss function.
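To make the framework concrete, below is a minimal sketch of how hard/soft proximity targets and a dependency-aware loss could fit together, assuming an exponential kernel over temporal lag for soft proximity and a soft-label InfoNCE-style cross-entropy. The function names (soft_proximity_targets, hard_proximity_targets, dependency_aware_loss) and the kernel choice are illustrative assumptions, not the paper's exact analytical form.

```python
import torch
import torch.nn.functional as F

def soft_proximity_targets(timestamps: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """Soft proximity: similarity decays smoothly with temporal lag.
    (Exponential kernel used here as a stand-in for the paper's analytical form.)"""
    lag = (timestamps[:, None] - timestamps[None, :]).abs().float()
    return torch.exp(-lag / scale)

def hard_proximity_targets(timestamps: torch.Tensor, window: int = 1) -> torch.Tensor:
    """Hard proximity: samples within `window` steps count as fully similar."""
    lag = (timestamps[:, None] - timestamps[None, :]).abs()
    return (lag <= window).float()

def dependency_aware_loss(embeddings: torch.Tensor, targets: torch.Tensor,
                          temperature: float = 0.1) -> torch.Tensor:
    """Cross-entropy between the model's similarity distribution over the batch
    and the proximity-derived target distribution (a soft-label InfoNCE variant)."""
    z = F.normalize(embeddings, dim=-1)
    logits = z @ z.T / temperature
    # Exclude self-similarity from predictions and targets (large negative value
    # instead of -inf so that 0 * log_prob stays finite on the diagonal).
    mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(mask, -1e9)
    targets = targets.masked_fill(mask, 0.0)
    targets = targets / targets.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    log_probs = F.log_softmax(logits, dim=-1)
    return -(targets * log_probs).sum(dim=-1).mean()

# Example: embeddings of 32 consecutive time steps from some encoder.
emb = torch.randn(32, 128)
t = torch.arange(32)
loss = dependency_aware_loss(emb, soft_proximity_targets(t, scale=4.0))
```

Normalizing each row of the target matrix turns the objective into a cross-entropy between the model's similarity distribution and the proximity-derived one; with hard proximity targets, this reduces to a standard multi-positive InfoNCE objective.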

Takeaways, Limitations

Takeaways:
Presents a new theoretical framework for contrastive SSL on continuous dependent data.
Develops a dependency-aware loss function leveraging both hard and soft proximity.
Achieves SOTA performance on time-series and spatiotemporal data (accuracy gains of 4.17% and 2.08% on the UEA and UCR benchmarks, respectively, and a 7% ROC-AUC gain on a drought classification task).
Limitations:
No specific limitations are mentioned in the paper.