Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

CMD-HAR: Cross-Modal Disentanglement for Wearable Human Activity Recognition

Created by
  • Haebom

Authors

Hanyu Liu, Siyao Li, Ying Yu, Yixuan Jiang, Hang Xiao, Jingxi Long, Haotian Tang, Chao Li

Outline

This paper addresses three challenges in sensor-based human activity recognition (HAR): mixed multimodal data distributions, activity heterogeneity, and the complexity of deploying models on wearable devices. To this end, the authors propose a spatial-temporal attention-based modal decomposition, alignment, and fusion strategy: it disentangles the mixed distribution of sensor data, captures key discriminative activity features through separate multimodal spatial-temporal representations, and incorporates gradient modulation to mitigate data heterogeneity. They also build a wearable deployment simulation system and demonstrate the model's effectiveness through experiments on multiple public datasets.
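The paper's code is not included in this summary, so the following is only a minimal PyTorch sketch of how a spatial-temporal attention disentanglement-and-fusion block for multimodal sensor streams might be structured. All module names, dimensions, and the overall layout are illustrative assumptions, not the authors' CMD-HAR implementation.

```python
# Minimal, illustrative sketch of spatial-temporal attention over
# multimodal sensor windows: temporal attention within each modality,
# spatial attention across modalities, then fusion. Assumptions only,
# not the authors' CMD-HAR code.
import torch
import torch.nn as nn

class SpatialTemporalDisentangleFusion(nn.Module):
    def __init__(self, n_modalities: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # One encoder per modality (e.g., accelerometer, gyroscope),
        # each projecting 3 raw axes into a shared feature space.
        self.encoders = nn.ModuleList(
            nn.Linear(3, d_model) for _ in range(n_modalities)
        )
        # Temporal attention: attends across time steps within a modality.
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Spatial attention: attends across modalities at each time step.
        self.spatial_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fuse = nn.Linear(n_modalities * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_modalities, time, 3) raw sensor windows
        B, M, T, _ = x.shape
        feats = torch.stack(
            [enc(x[:, m]) for m, enc in enumerate(self.encoders)], dim=1
        )  # (B, M, T, d_model)

        # Temporal attention per modality: fold modalities into the batch.
        t_in = feats.reshape(B * M, T, -1)
        t_out, _ = self.temporal_attn(t_in, t_in, t_in)
        t_out = t_out.reshape(B, M, T, -1)

        # Spatial attention per time step: fold time into the batch.
        s_in = t_out.permute(0, 2, 1, 3).reshape(B * T, M, -1)
        s_out, _ = self.spatial_attn(s_in, s_in, s_in)
        s_out = s_out.reshape(B, T, M, -1)

        # Fuse modality features, then pool over time for a window embedding.
        fused = self.fuse(s_out.reshape(B, T, -1))  # (B, T, d_model)
        return fused.mean(dim=1)                    # (B, d_model)
```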
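The gradient-modulation idea can likewise be sketched: after backpropagation, the gradients of encoders for dominant modalities are damped so that weaker modalities are not drowned out during training. The scaling rule below is a simplified stand-in; the paper's exact modulation scheme may differ.

```python
# Illustrative gradient modulation for unbalanced modalities. The
# coefficient rule is a simplified assumption, not the paper's exact
# formulation.
import torch

def modulate_gradients(encoders, scores, floor: float = 0.1):
    """encoders: dict name -> nn.Module (one per modality);
    scores: dict name -> float, a per-modality performance proxy
    (e.g., mean confidence of that modality's unimodal predictions)."""
    mean_score = sum(scores.values()) / len(scores)
    for name, module in encoders.items():
        # Modalities scoring above the mean get their gradients damped,
        # letting under-performing modalities catch up.
        ratio = scores[name] / (mean_score + 1e-8)
        coeff = 1.0 if ratio <= 1.0 else max(floor, 1.0 / ratio)
        for p in module.parameters():
            if p.grad is not None:
                p.grad.mul_(coeff)
```

In a training loop, such a function would be called after `loss.backward()` and before `optimizer.step()`, so the optimizer consumes the rescaled gradients.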

Takeaways, Limitations

Takeaways:
  • Presents a novel spatial-temporal attention-based fusion strategy for effectively processing multimodal sensor data.
  • Applies gradient modulation to alleviate activity heterogeneity.
  • Builds a simulation system for model deployment in wearable environments.
  • Validates model performance through experiments on multiple public datasets.
Limitations:
  • The generalization performance of the proposed model needs further verification.
  • Performance evaluation and limitations analysis in real (not simulated) wearable environments are still required.
  • Applicability to more diverse and complex activities remains to be examined.