This is a page that curates AI-related papers published worldwide. All content here is summarized using Google Gemini and operated on a non-profit basis. Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.
DisMS-TS: Eliminating Redundant Multi-Scale Features for Time Series Classification
Created by
Haebom
Author
Zhipeng Liu, Peibo Duan, Binwu Wang, Xuan Tang, Qi Chu, Changsheng Zhang, Yongsheng Huang, Bin Zhang
Outline
This paper proposes DisMS-TS, a novel end-to-end multi-scale framework for classifying real-world time series with diverse temporal variations. Existing multi-scale methods suffer from degraded performance because they cannot remove redundant scale-shared features; to address this, DisMS-TS uses temporal disentanglement modules to capture scale-shared and scale-specific temporal representations separately. Two added regularization terms encourage consistency among the scale-shared representations and diversity among the scale-specific ones, enabling effective learning across all time scales. Experiments on multiple datasets show that DisMS-TS improves accuracy by up to 9.71% over existing methods.
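The two regularization terms can be sketched roughly as follows. This is an illustrative assumption, not the paper's implementation: the exact loss forms (mean-squared deviation for consistency, pairwise cosine similarity for disparity) and the function names are hypothetical stand-ins for whatever DisMS-TS actually uses.

```python
import numpy as np

def consistency_loss(shared_reps):
    """Encourage scale-shared representations to agree across scales:
    mean squared deviation of each scale's representation from their mean.
    (Assumed loss form for illustration.)"""
    reps = np.stack(shared_reps)              # (num_scales, dim)
    center = reps.mean(axis=0)
    return float(np.mean((reps - center) ** 2))

def disparity_loss(specific_reps):
    """Encourage scale-specific representations to differ across scales:
    mean pairwise cosine similarity (minimizing it pushes them apart).
    (Assumed loss form for illustration.)"""
    reps = np.stack(specific_reps)            # (num_scales, dim)
    reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sim = reps @ reps.T                       # pairwise cosine similarities
    n = len(reps)
    off_diag = sim[~np.eye(n, dtype=bool)]    # exclude self-similarity
    return float(off_diag.mean())

# Toy example: three scales, 4-dimensional representations.
rng = np.random.default_rng(0)
shared = [rng.normal(size=4) for _ in range(3)]
specific = [rng.normal(size=4) for _ in range(3)]
aux_loss = consistency_loss(shared) + disparity_loss(specific)
```

In training, such auxiliary terms would be weighted and added to the classification loss, so that shared representations converge across scales while specific representations stay distinct.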
Takeaways, Limitations
•
Takeaways:
◦
A novel methodology for improving the performance of time series classification based on multi-scale analysis
◦
Improves model performance by effectively disentangling scale-shared and scale-specific features.
◦
Demonstrated superior performance over existing methods on various datasets (up to 9.71% improvement)
•
Limitations:
◦
Lack of analysis on the computational complexity and time efficiency of the proposed method.
◦
Need for additional evaluation of generalization performance for different types of time series data
◦
The possibility of overfitting to specific datasets requires further examination.