Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All summaries here are generated with Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Low Resource Reconstruction Attacks Through Benign Prompts

Created by
  • Haebom

Author

Sol Yarkoni, Roi Livni

Outline

This paper addresses the privacy, copyright, and data-management risks posed by generative models such as recently developed diffusion models. Unlike prior work on image reconstruction, which requires substantial compute and access to the training data, this paper presents a novel attack that identifies seemingly innocuous prompts capable of triggering reconstructions of training images, using low resources and no access to the training set. In particular, by exploiting recurring templated layouts and patterns in data scraped from e-commerce platforms, the authors show a case where a prompt as simple as "blue unisex t-shirt" generates a real person's face. This highlights the risk that even unwitting users may unintentionally reconstruct training images.
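The summary does not spell out how such prompts are detected, but one common signal for memorization is that a benign prompt collapses onto (nearly) the same image across many random seeds. Below is a minimal, hypothetical sketch of that check: the embedding vectors are synthetic stand-ins for image embeddings of repeated generations, and the `looks_memorized` helper and its 0.9 threshold are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

def mean_pairwise_cosine(embeddings: np.ndarray) -> float:
    """Mean cosine similarity over all distinct pairs of row vectors."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    iu = np.triu_indices(len(embeddings), k=1)  # upper triangle, no diagonal
    return float(sims[iu].mean())

def looks_memorized(embeddings: np.ndarray, threshold: float = 0.9) -> bool:
    """Flag a prompt whose generations collapse onto almost one image.

    `embeddings` would be image embeddings (e.g. from CLIP) of several
    generations of the SAME prompt under different seeds; here we only
    assume they are row vectors. Threshold is an illustrative choice.
    """
    return mean_pairwise_cosine(embeddings) >= threshold

# Synthetic demo: a "memorized" prompt yields near-identical embeddings,
# a normal prompt yields diverse ones.
rng = np.random.default_rng(0)
base = rng.normal(size=512)
memorized = np.stack([base + 0.01 * rng.normal(size=512) for _ in range(8)])
diverse = rng.normal(size=(8, 512))
```

With real generations, a prompt like the "blue unisex t-shirt" example would be expected to score near 1.0 on this metric, while ordinary prompts produce varied images and score much lower.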

Takeaways, Limitations

Takeaways: Demonstrates that training images of generative models can be reconstructed with low resources and without access to the training data, raising awareness of the security and privacy threats these models pose. Shows that seemingly innocuous prompts can have harmful consequences, and highlights the risks of training on data scraped from e-commerce platforms.
Limitations: Further research is needed on how well the attack generalizes and whether it applies to other generative models. The findings may be limited to the specific models and datasets analyzed, and results built around individual examples such as "blue unisex t-shirt" may not generalize.