This paper focuses on privacy threats in large language model (LLM)-based recommender systems (RecSys). LLM-based RecSys leverages in-context learning (ICL) to personalize recommendations by incorporating sensitive data from users' past interactions (e.g., clicked products, product reviews) into system prompts. This sensitive information exposes users to novel privacy attacks, yet research on this topic is lacking. In this paper, we design four membership inference attacks (MIAs), namely direct question, hallucination, similarity, and contamination, to determine whether a user's past interaction data has been used in system prompts. We evaluate these attacks using three LLMs and two RecSys benchmark datasets. Our experimental results show that the direct question and contamination attacks achieve high attack success rates, demonstrating that MIAs pose a practical threat to LLM-based RecSys. We also analyze factors that influence these attacks, such as the number of shots in the system prompt and the position of the victim's data within it.
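To make the attack surface concrete, the sketch below illustrates how an ICL-style system prompt might embed a user's past interactions and how a direct question attack could probe it for membership. This is a minimal sketch under assumed interfaces: the function names (build_system_prompt, direct_question_attack), the toy LLM stand-in, and the prompt wording are illustrative, not the paper's implementation.

```python
# Minimal sketch, assuming a generic chat-style LLM interface: builds an ICL
# system prompt from a user's past interactions and runs a "direct question"
# membership probe against it. Names and prompt wording are hypothetical.
import re


def build_system_prompt(shots):
    """Embed a user's past interactions (clicked products, reviews) as ICL shots."""
    lines = ["You are a product recommender. Personalize using the history below."]
    for i, shot in enumerate(shots, 1):
        lines.append(f"Shot {i}: clicked '{shot['product']}', review: \"{shot['review']}\"")
    return "\n".join(lines)


def direct_question_attack(query_llm, system_prompt, candidate_product):
    """Ask the model outright whether the candidate interaction is in its prompt;
    an affirmative answer is taken as evidence of membership."""
    probe = (f"Does your context include an interaction with the product "
             f"'{candidate_product}'? Answer yes or no.")
    answer = query_llm(system_prompt, probe)
    return answer.strip().lower().startswith("yes")


if __name__ == "__main__":
    shots = [
        {"product": "Trail Running Shoes", "review": "Great grip on wet rocks."},
        {"product": "Insulated Water Bottle", "review": "Keeps drinks cold all day."},
    ]
    prompt = build_system_prompt(shots)

    def toy_llm(system_prompt, user_message):
        # Stand-in for a real LLM call: answers "yes" iff the quoted product in
        # the probe appears verbatim in the system prompt. Replace with an API client.
        match = re.search(r"'([^']+)'", user_message)
        return "yes" if match and match.group(1) in system_prompt else "no"

    print(direct_question_attack(toy_llm, prompt, "Trail Running Shoes"))  # True  (member)
    print(direct_question_attack(toy_llm, prompt, "Yoga Mat"))             # False (non-member)
```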