This paper analyzes the evolution of generative recommendation (GR) systems and their limitations, and proposes a novel approach to leveraging large language models (LLMs) in recommender systems. Specifically, we identify the scaling limitations of existing Semantic ID (SID)-based GR models and experimentally demonstrate that an LLM-as-RS approach, which uses LLMs directly as recommenders, exhibits better scaling behavior. LLM-as-RS achieves up to a 20% improvement in recommendation performance, demonstrating that LLMs can effectively model user-item interactions.