Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Realizing Scaling Laws in Recommender Systems: A Foundation-Expert Paradigm for Hyperscale Model Deployment

Created by
  • Haebom

Author

Dai Li, Kevin Course, Wei Li, Hongwei Li, Jie Hua, Yiqi Chen, Zhao Zhu, Rui Jian, Xuan Cao, Bi Xue, Yu Shi, Jing Qian, Kai Ren, Matt Ma, Qunshu Zhang, Rui Li

Outline

This paper proposes the Foundation-Expert paradigm to address a key challenge in recommender systems: efficiently deploying large-scale models in production. Instead of the conventional single-model approach, a central recommendation model (the Foundation Model) is trained on data from diverse surfaces and modalities, and lightweight models (Expert Models) specialized for each surface are trained on top of it. The Foundation Model learns generalized knowledge, while each surface-specific Expert Model efficiently transfers that knowledge through goal-oriented embeddings, adapting to its surface's data distribution and optimization objectives. To support this, Meta built a production system called HyperCast, redesigning the training, serving, logging, and iteration workflows. In real-world deployment, the system delivered improved online metrics and faster development velocity while maintaining infrastructure efficiency compared to existing systems. The paper presents this successful large-scale deployment as a proven blueprint for realizing the promise of scaling laws in recommender systems.
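The sketch below illustrates the Foundation-Expert split described in the summary: a large shared encoder produces generalized embeddings, and a small per-surface expert consumes them alongside surface-specific features. This is a minimal illustration assuming a PyTorch setup; the class names (FoundationModel, SurfaceExpert), layer sizes, and the "goal-oriented projection" are hypothetical stand-ins, not the paper's actual HyperCast implementation.

```python
# Minimal sketch of the Foundation-Expert idea, assuming a PyTorch stack.
# All names and dimensions are illustrative, not the paper's implementation.
import torch
import torch.nn as nn

class FoundationModel(nn.Module):
    """Large central model trained on data from many surfaces and modalities."""
    def __init__(self, feature_dim=256, hidden_dim=1024, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, features):
        # Produces a generalized representation shared by all surfaces.
        return self.encoder(features)

class SurfaceExpert(nn.Module):
    """Lightweight per-surface model that reuses foundation embeddings."""
    def __init__(self, embed_dim=128, surface_dim=64, hidden_dim=128):
        super().__init__()
        # Assumed "goal-oriented" projection: adapts the shared embedding
        # to this surface's own optimization objective.
        self.project = nn.Linear(embed_dim, hidden_dim)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim + surface_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, foundation_embed, surface_features):
        z = self.project(foundation_embed)
        return torch.sigmoid(self.head(torch.cat([z, surface_features], dim=-1)))

# Usage: the foundation model is trained centrally; each expert is trained on
# its own surface's data, reusing (not re-learning) the shared representation.
foundation = FoundationModel()
expert = SurfaceExpert()
features = torch.randn(32, 256)          # cross-surface input features
surface_features = torch.randn(32, 64)   # surface-specific features
with torch.no_grad():                    # keep the foundation frozen for the expert pass
    shared = foundation(features)
pred = expert(shared, surface_features)  # per-surface engagement prediction
```

The design choice this sketch highlights is the one the summary emphasizes: only the small expert needs to be retrained per surface, which is what keeps iteration fast without retraining the large central model.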

Takeaways, Limitations

Takeaways:
Validates the effectiveness of the Foundation-Expert paradigm for efficiently deploying large-scale recommender systems in a real-world environment.
Achieves improved online metrics and faster development velocity while maintaining infrastructure efficiency.
Demonstrates that generalized knowledge can be learned and transferred across diverse recommendation surfaces and modalities.
Provides a practical blueprint for developing and deploying large-scale recommender systems.
Limitations:
Lack of information on the specific architecture and implementation details of the HyperCast system.
Insufficient analysis of the limits and constraints on generalization across various recommendation surfaces and modalities.
Lack of detailed description of the interaction and knowledge transfer mechanisms between the Foundation and Expert models.
Lack of comparative analysis with other large-scale recommendation systems.