Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Membership Inference Attacks on LLM-based Recommender Systems

Created by
  • Haebom

Author

Jiajie He, Yuechun Gu, Min-Chun Chen, Keke Chen

Outline

This paper focuses on privacy threats in a large-scale language model (LLM)-based recommender system (RecSys). LLM-based RecSys leverages in-context learning (ICL) to personalize recommendations by incorporating sensitive user past interaction data (e.g., clicked products, product reviews) into system prompts. This sensitive information poses a risk for novel privacy attacks, but research on this topic is lacking. In this paper, we design four membership inference attacks (MIAs)—direct question, hallucination, similarity, and contamination—to determine whether a user's past interaction data has been used in system prompts. We evaluate these attacks using three LLMs and two RecSys benchmark datasets. Our experimental results demonstrate that the direct question and contamination attacks achieve significantly high attack success rates, demonstrating the practicality of MIA threats in LLM RecSys. We also analyze factors that influence the attack, such as the number of shots in the system prompts and the victim's location.

Takeaways, Limitations

Takeaways: This paper demonstrates the realism of privacy threats in LLM-based RecSys and highlights the importance of privacy protection in future development of LLM-based RecSys by demonstrating the high effectiveness of direct questioning and contamination attacks. Furthermore, it analyzes the factors influencing these attacks and suggests directions for future defense techniques.
Limitations: Since the evaluation was conducted using a limited number of LLMs and datasets, further research on a wider range of LLMs and datasets is needed. Furthermore, in addition to the proposed MIA attack, other types of privacy attacks should be considered. Further research is needed to assess the attack success rates and effectiveness of defense techniques in real-world service environments.
👍