Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

JEPA4Rec: Learning Effective Language Representations for Sequential Recommendation via Joint Embedding Predictive Architecture

Created by
  • Haebom

Authors

Minh-Anh Nguyen, Dung D. Le

Outline

JEPA4Rec is a novel framework for sequential recommendation, proposed to address two challenges: limited understanding of general user preferences and data scarcity. It flattens descriptive item information, such as titles and categories, into sentences and uses a bidirectional Transformer encoder to learn semantically rich, transferable representations. Through masking techniques and self-supervised learning, it learns generalized item embeddings and improves recommendation performance. On several real-world datasets it outperforms existing state-of-the-art methods, demonstrating its effectiveness in cross-domain, cross-platform, and low-resource settings.
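To make the mechanism above more concrete, the following is a minimal sketch of a JEPA-style self-supervised step over flattened item sentences: item metadata is turned into a sentence, some tokens are masked, and a predictor regresses the context encoder's outputs onto a separate target encoder's representations of the full sentence in embedding space. All names (TextEncoder, jepa_step), dimensions, and the MSE objective here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

def flatten_item(item: dict) -> str:
    """Flatten descriptive item fields (title, category, ...) into one sentence."""
    return f"Title: {item['title']}. Category: {item['category']}."

class TextEncoder(nn.Module):
    """Stand-in for a bidirectional Transformer encoder over item sentences."""
    def __init__(self, vocab_size: int = 30522, dim: int = 256,
                 layers: int = 2, heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len) -> (batch, seq_len, dim)
        return self.encoder(self.embed(token_ids))

# Context encoder sees the masked sentence; target encoder sees the full one.
context_encoder = TextEncoder()
target_encoder = TextEncoder()
predictor = nn.Linear(256, 256)  # maps context outputs to predicted target embeddings

optimizer = torch.optim.AdamW(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-4)

def jepa_step(token_ids: torch.Tensor, mask: torch.Tensor,
              mask_token_id: int = 103) -> float:
    """One self-supervised step: predict representations of masked tokens."""
    corrupted = token_ids.masked_fill(mask, mask_token_id)
    predicted = predictor(context_encoder(corrupted))
    with torch.no_grad():                      # targets are not backpropagated
        targets = target_encoder(token_ids)
    # Regress predicted embeddings onto target embeddings at masked positions,
    # so the model predicts representations instead of reconstructing tokens.
    loss = nn.functional.mse_loss(predicted[mask], targets[mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random token ids and a ~30% random mask (batch=2, seq_len=16).
tokens = torch.randint(0, 30522, (2, 16))
mask = torch.rand(2, 16) < 0.3
print(jepa_step(tokens, mask))
```

In JEPA-style setups the target encoder is typically kept frozen per step or updated as an exponential moving average of the context encoder, and the loss is computed only at masked positions, so training predicts representations in embedding space rather than reconstructing the original tokens.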

Takeaways, Limitations

Takeaways:
  • Presents a novel approach that effectively addresses data scarcity and the limited understanding of general user preferences.
  • Improves recommendation performance by learning semantically rich and transferable item representations.
  • Enables efficient model training and performance gains through self-supervised learning.
  • Shows strong performance in cross-domain, cross-platform, and low-resource settings.
Limitations:
  • Computational cost may increase due to the complexity of the proposed model.
  • Information may be lost when item metadata is converted into sentences.
  • Strong performance may be limited to certain types of data; further research on generalization is needed.
  • Lacks a detailed analysis of the characteristics of the datasets used.