Daily Arxiv

This page collects papers related to artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

Understanding Generative Recommendation with Semantic IDs from a Model-scaling View

Created by
  • Haebom

Authors

Jingzhe Liu, Liam Collins, Jiliang Tang, Tong Zhao, Neil Shah, Clark Mingxuan Ju

Outline

This paper analyzes the evolution of generative recommendation (GR) systems and their limitations from a model-scaling perspective, and proposes leveraging large language models (LLMs) directly in recommender systems. Specifically, it identifies the scaling limitations of existing Semantic ID (SID)-based GR models and experimentally demonstrates that an LLM-as-RS approach, which uses the LLM itself as the recommender, scales better with model size. LLM-as-RS achieves up to a 20% performance improvement, suggesting that LLMs can effectively model user-item interactions.
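The following minimal sketch contrasts the two paradigms. It is an illustrative assumption, not the authors' implementation: the SID code table, the history format, and the prompt wording are all hypothetical.

```python
from typing import Callable, Dict, List

# --- SID-based GR: each item is mapped to a short sequence of discrete
# "semantic ID" codes (e.g., learned by quantizing item embeddings), and a
# sequence model autoregressively generates the next item's codes. ---
def recommend_sid(
    history: List[str],
    item_to_sid: Dict[str, List[int]],           # hypothetical code table
    generate: Callable[[List[int]], List[int]],  # trained sequence model
) -> List[int]:
    """Flatten the user's history into SID codes, then generate the codes
    that identify the predicted next item."""
    prompt_codes = [c for item in history for c in item_to_sid[item]]
    return generate(prompt_codes)

# --- LLM-as-RS: the LLM consumes the interaction history as natural
# language and is prompted (or fine-tuned) to name the next item directly. ---
def recommend_llm(
    history: List[str],
    llm_complete: Callable[[str], str],  # any text-completion LLM
) -> str:
    prompt = (
        "A user interacted with these items in order: "
        + "; ".join(history)
        + ". Predict the item the user will interact with next:"
    )
    return llm_complete(prompt)
```

One intuition consistent with the scaling findings above: SID-based GR compresses every item into a small fixed code vocabulary, which can bottleneck what a larger model is able to express, whereas LLM-as-RS keeps items in natural language and can draw on the LLM's pretrained knowledge as it scales.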

Takeaways, Limitations

Takeaways:
SID-based GR models show diminishing performance gains as model size increases.
The LLM-as-RS approach exhibits better scaling behavior.
LLMs can effectively learn collaborative-filtering information.
LLM-as-RS represents a promising direction for advancing the GR field.
Limitations:
The analysis of scaling limitations is specific to SID-based GR models.
The potential of the LLM-as-RS approach is demonstrated, but further research is needed to improve its performance.
The scaling analysis is confined to comparing performance across model sizes from 44M to 14B parameters (see the sketch after this list).
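For illustration only, one way to quantify such a scaling comparison is to fit a power law to (model size, quality) points for each approach and compare the fitted exponents. The data points below are made-up placeholders, not results from the paper.

```python
import numpy as np

# Hypothetical placeholder scores at the model sizes the paper spans.
sizes = np.array([44e6, 350e6, 1.5e9, 14e9])         # 44M to 14B parameters
sid_scores = np.array([0.100, 0.110, 0.115, 0.117])  # plateauing (made up)
llm_scores = np.array([0.090, 0.110, 0.130, 0.150])  # still rising (made up)

def power_law_exponent(sizes: np.ndarray, scores: np.ndarray) -> float:
    """Slope of log(score) vs. log(size); a larger slope means the
    approach benefits more from additional parameters."""
    slope, _intercept = np.polyfit(np.log(sizes), np.log(scores), 1)
    return slope

print(f"SID-based GR exponent: {power_law_exponent(sizes, sid_scores):.3f}")
print(f"LLM-as-RS exponent:   {power_law_exponent(sizes, llm_scores):.3f}")
```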