Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Enhancing Sequential Recommender with Large Language Models for Joint Video and Comment Recommendation

Created by
  • Haebom

Authors

Bowen Zheng, Zihan Lin, Enze Liu, Chen Yang, Enyang Bai, Cheng Ling, Wayne Xin Zhao, Ji-Rong Wen

Outline

In this paper, the authors propose LSVCR, a recommender system that jointly leverages video and comment interactions on online video platforms. LSVCR uses a sequential recommendation (SR) model as the main recommendation backbone and a large language model (LLM) as a supplemental recommender. To integrate the strengths of the two models, they propose a two-stage training paradigm: personalized preference alignment and recommendation-oriented fine-tuning. Experiments demonstrate the effectiveness of LSVCR on both video and comment recommendation tasks, and A/B tests on the KuaiShou platform show a 4.13% increase in comment watch time. The LLM component is discarded at deployment, so only the enhanced SR model is served online.
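The first stage aligns the SR model's user representation with the preference representation derived from the LLM. As a minimal sketch of this alignment idea (not the paper's actual implementation; the function name, embeddings, and loss choice here are illustrative assumptions), a batch-wise contrastive loss treating the SR and LLM views of the same user as a positive pair might look like:

```python
import numpy as np

def preference_alignment_loss(sr_user_emb, llm_pref_emb, temperature=0.1):
    """Illustrative InfoNCE-style alignment between two views of each user.

    sr_user_emb:  (B, d) user embeddings from the sequential recommender
    llm_pref_emb: (B, d) preference embeddings derived from the LLM
    Row i of each matrix is assumed to describe the same user, so the
    diagonal of the similarity matrix holds the positive pairs.
    """
    # L2-normalize both views so the dot product is cosine similarity
    sr = sr_user_emb / np.linalg.norm(sr_user_emb, axis=1, keepdims=True)
    llm = llm_pref_emb / np.linalg.norm(llm_pref_emb, axis=1, keepdims=True)
    logits = sr @ llm.T / temperature          # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Maximize the probability of each user's own LLM view (the diagonal)
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pulls each user's two representations together while pushing apart representations of different users, which is one common way such cross-model alignment stages are realized.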

Takeaways, Limitations

Takeaways:
Jointly modeling video and comment interactions captures user preferences more accurately.
Combining a sequential recommendation model with an LLM is an effective way to improve recommendation performance.
Practical effectiveness is verified through A/B testing on a real platform (KuaiShou).
The gains are concrete, in the form of increased comment watch time.
Limitations:
Because the LLM serves only as an auxiliary component and is discarded at deployment, its potential may not be fully exploited.
The method is tailored to the characteristics of the KuaiShou platform, so its generalizability to other platforms requires further study.
Using an LLM during training may increase computational cost and reduce efficiency.