Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

LLM-Enhanced Linear Autoencoders for Recommendations

Created by
  • Haebom

Author

Jaewan Moon, Seongmin Park, Jongwuk Lee

Outline

This paper proposes L3AE, a method that leverages large language models (LLMs) to overcome a key limitation of conventional linear autoencoders (LAEs) in recommender systems: because LAEs rely on sparse word co-occurrence patterns, they struggle to capture rich textual semantics. L3AE incorporates LLMs into the LAE framework to fuse textual semantics with heterogeneous signals from user-item interactions. It employs a two-step optimization strategy: (1) constructing a semantic item-item correlation matrix from LLM-derived item representations, and (2) learning an item-item weight matrix from collaborative signals while using the semantic correlations as regularization. Each step admits a closed-form solution, ensuring global optimality and computational efficiency. Experiments on three benchmark datasets show that L3AE consistently outperforms state-of-the-art LLM-enhanced models, with gains of 27.6% in Recall@20 and 39.3% in NDCG@20. The source code is available at https://github.com/jaewan7599/L3AE_CIKM2025 .
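The two-step strategy can be sketched in a few lines of NumPy. This is an illustrative simplification, not the paper's exact formulation: the objective below (a ridge-style least squares pulled toward the semantic correlation matrix), the function name `semantic_lae`, the hyperparameters `lam_cf`/`lam_sem`, and the use of cosine similarity over LLM item embeddings are all assumptions made for the sketch.

```python
import numpy as np

def semantic_lae(X, E, lam_cf=10.0, lam_sem=1.0):
    """Illustrative closed-form LAE with semantic regularization.

    X: (num_users, num_items) binary user-item interaction matrix.
    E: (num_items, d) item representations derived from an LLM
       (assumption: pooled text embeddings of item descriptions).
    Minimizes  ||X - XB||_F^2 + lam_cf ||B||_F^2 + lam_sem ||B - S||_F^2
    where S is the semantic item-item correlation matrix.
    """
    # Step 1: semantic item-item correlation from normalized embeddings.
    En = E / np.linalg.norm(E, axis=1, keepdims=True)
    S = En @ En.T

    # Step 2: closed-form solution of the regularized least-squares
    # objective above. Setting the gradient to zero gives
    #   (G + (lam_cf + lam_sem) I) B = G + lam_sem S,  with G = X^T X.
    G = X.T @ X
    n = G.shape[0]
    B = np.linalg.solve(G + (lam_cf + lam_sem) * np.eye(n),
                        G + lam_sem * S)
    return B
```

At inference, scores for a user would be `x_u @ B`, as in standard LAE-style models; both steps are plain linear algebra, which is what makes the closed-form solution globally optimal and cheap relative to iterative training.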

Takeaways, Limitations

Takeaways:
Presents a novel method that effectively integrates LLMs into the LAE framework to improve text-aware recommender systems.
Achieves global optimality and computational efficiency simultaneously through a two-step, closed-form optimization strategy.
Experimentally verifies significant performance improvements over existing state-of-the-art models.
Ensures reproducibility and extensibility through open-source code.
Limitations:
The performance improvements may be limited to specific datasets.
The computational cost and inference latency of the LLM need to be considered.
Generalization to diverse types of text data and recommendation settings requires further verification.