This paper reveals that large language model (LLM)-based recommender systems, while overcoming the limitations of conventional recommender systems, are vulnerable to inversion attacks that can compromise both system and user privacy. We conduct the first systematic study of inversion attacks against LLM-based recommender systems, in which an adversary attempts to reconstruct the original prompts, including personal preferences, interaction histories, and demographic attributes, from the output logits of the recommender model. We reproduce the vec2text framework and propose a novel method, Similarity Guided Refinement, that enables accurate reconstruction of textual prompts. Extensive experiments on two representative LLM-based recommender models from the movie and book domains demonstrate that our attack recovers approximately 65% of user interactions and correctly infers age and gender in 87% of cases. We further show that privacy leakage is largely independent of the victim model's recommendation performance but depends strongly on domain consistency and prompt complexity. These findings expose a serious privacy vulnerability in LLM-based recommender systems.