This paper addresses the privacy, copyright, and data-management risks posed by generative models, in particular recently developed diffusion models. Unlike prior work on image reconstruction attacks, which requires high-performance hardware and access to the training data, this paper presents a novel attack that identifies seemingly innocuous prompts capable of triggering harmful image reconstructions, without access to the training data and with modest computational resources. In particular, by exploiting basic vulnerabilities of prompts derived from data scraped from e-commerce platforms, such as template layouts, images, and recurring patterns, we demonstrate a case in which a prompt as generic as "blue unisex t-shirt" generates a real person's face. This highlights the risk of unintentional image reconstruction even by unwitting users.