Daily Arxiv

This page collects and organizes papers on artificial intelligence published worldwide.
Summaries are generated with Google Gemini, and the site is operated on a non-profit basis.
Copyright in each paper remains with its authors and their institutions; when sharing, please cite the source.

TimeMosaic: Temporal Heterogeneity Guided Time Series Forecasting via Adaptive Granularity Patch and Segment-wise Decoding

Created by
  • Haebom

Author

Kuiye Ding, Fanda Fan, Chunyi Hou, Zheya Wang, Lei Wang, Zhengxin Yang, Jianfeng Zhan

Outline

Multivariate time series forecasting is crucial in diverse fields, including finance, transportation, climate, and energy. Existing patch-based methods rely on fixed-length segmentation, which ignores the heterogeneity of local temporal dynamics and of the forecast horizons themselves. This design loses detail in information-dense regions, introduces redundancy in stable segments, and fails to capture the distinct complexity of short- and long-term horizons. To address this temporal heterogeneity, the paper proposes a forecasting framework called TimeMosaic. TimeMosaic uses adaptive patch embedding to dynamically adjust granularity according to local information density, balancing structural clarity and motif reuse while preserving temporal continuity. In addition, instead of applying a single uniform decoder, it introduces segment-wise decoding, which treats each forecast horizon as a related subtask and adapts to the difficulty and information requirements of each horizon. Extensive evaluation on benchmark datasets shows that TimeMosaic consistently improves upon existing methods, and a model trained on a large corpus of 321 billion observations achieves performance competitive with state-of-the-art time series foundation models (TSFMs).
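The abstract describes two mechanisms: patch lengths chosen from local information density, and a separate decoding path per forecast segment. Below is a minimal, hedged sketch of both ideas in PyTorch. The names `adaptive_patch_sizes` and `SegmentWiseDecoder`, and the use of local variance as the information-density proxy, are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; not the TimeMosaic code.
import torch
import torch.nn as nn


def adaptive_patch_sizes(x: torch.Tensor, base: int = 8, dense: int = 4) -> list[int]:
    """Assign shorter patches to high-variance (information-dense) regions
    and longer patches to stable regions. x: (seq_len,), one channel.
    Assumption: local standard deviation serves as the density proxy."""
    sizes, i = [], 0
    while i < x.numel():
        window = x[i : i + base]
        size = dense if window.numel() > 1 and window.std() > x.std() else base
        size = min(size, x.numel() - i)  # clip the final patch to the series end
        sizes.append(size)
        i += size
    return sizes


class SegmentWiseDecoder(nn.Module):
    """One lightweight head per forecast segment, instead of a single
    uniform decoder over the whole horizon (hypothetical structure;
    assumes horizon is divisible by n_segments)."""

    def __init__(self, d_model: int, horizon: int, n_segments: int = 4):
        super().__init__()
        self.seg_len = horizon // n_segments
        self.heads = nn.ModuleList(
            nn.Linear(d_model, self.seg_len) for _ in range(n_segments)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, d_model) pooled encoder state; each head predicts its
        # own segment, so capacity can adapt per horizon range.
        return torch.cat([head(h) for head in self.heads], dim=-1)


x = torch.randn(96)                    # one input channel
print(adaptive_patch_sizes(x))         # variable patch lengths over the series
dec = SegmentWiseDecoder(d_model=64, horizon=96)
print(dec(torch.randn(2, 64)).shape)   # torch.Size([2, 96])
```

The point of the split decoder is that near-term segments, which are typically easier, need not share parameters with long-range segments, which face different difficulty and information requirements.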

Takeaways, Limitations

Takeaways:
Proposes the TimeMosaic framework to address temporal heterogeneity.
Controls granularity dynamically via adaptive patch embedding.
Adapts to the difficulty and information requirements of each forecast horizon through segment-wise decoding.
Delivers consistent performance improvements over existing methods.
Achieves performance competitive with state-of-the-art TSFMs when trained on a large corpus.
Limitations:
The paper does not explicitly discuss its limitations, and specific limitations are difficult to identify from the summary alone.