
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

IConMark: Robust Interpretable Concept-Based Watermark For AI Images

Created by
  • Haebom

Authors

Vinu Sankar Sadasivan, Mehrdad Saberi, Soheil Feizi

Outline

With the rapid development of generative AI and synthetic media, distinguishing AI-generated images from real ones has become increasingly important. This paper proposes IConMark, a new watermarking method designed to overcome the fragility of existing watermarking techniques. Instead of injecting noise or perturbations, IConMark embeds interpretable concepts into AI-generated images, which makes the watermark human-readable and robust against adversarial manipulation. It withstands a variety of image augmentations and allows humans to manually verify the watermark. IConMark maintains high detection accuracy and image quality, and it can be combined with existing watermarking techniques (StegaStamp, TrustMark) as IConMark+SS and IConMark+TM to further enhance robustness. Experimental results show that IConMark, IConMark+SS, and IConMark+TM achieve AUROC scores 10.8%, 14.5%, and 15.9% higher, respectively, than the best existing techniques.
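The paper's own implementation is not reproduced here, but the detection side of a concept-based watermark can be illustrated with a rough sketch. The snippet below is a minimal, hypothetical example, not the authors' IConMark code: it assumes a fixed concept list (`WATERMARK_CONCEPTS`) and uses an off-the-shelf CLIP model to score how strongly those concepts appear in an image; thresholding that score would act as a simple detector.

```python
# Minimal, hypothetical sketch of concept-based watermark detection.
# NOT the authors' IConMark implementation; the concept list and the use of an
# off-the-shelf CLIP model are assumptions for illustration only.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Hypothetical watermark concept vocabulary.
WATERMARK_CONCEPTS = ["a red balloon", "a wooden ladder", "a paper boat"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def concept_watermark_score(image_path: str, concepts=WATERMARK_CONCEPTS) -> float:
    """Return the mean image-text cosine similarity over the watermark concepts.

    A higher score suggests the interpretable concepts are present in the image;
    thresholding this score gives a simple, illustrative watermark detector.
    """
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=concepts, images=image, return_tensors="pt", padding=True)

    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )
    # L2-normalize and take the cosine similarity between the image and each concept.
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    sims = (image_emb @ text_emb.T).squeeze(0)  # shape: (num_concepts,)
    return sims.mean().item()
```

In this sketch, a human reviewer could perform the same check by simply looking for the listed concepts in the image, which is the interpretability property the paper emphasizes.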

Takeaways, Limitations

Takeaways:
• Presents an interpretable watermarking method for verifying AI-generated images that is robust to adversarial attacks.
• Achieves higher detection accuracy (measured by AUROC) and image quality than existing watermarking techniques (see the evaluation sketch after this list).
• Shows that robustness can be further improved by combining IConMark with existing techniques.
• Improves trustworthiness by allowing humans to manually verify the watermark.
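For context on how detection accuracy is typically reported, here is a minimal AUROC evaluation sketch. It assumes a scoring function such as `concept_watermark_score` above and two hypothetical folders of watermarked and clean images; it is not the paper's benchmark code.

```python
# Minimal AUROC evaluation sketch (assumed setup, not the paper's benchmark).
from pathlib import Path
from sklearn.metrics import roc_auc_score

def evaluate_detector(score_fn, watermarked_dir: str, clean_dir: str) -> float:
    """Score every image in both folders and return AUROC (label 1 = watermarked)."""
    scores, labels = [], []
    for label, folder in [(1, watermarked_dir), (0, clean_dir)]:
        for path in Path(folder).glob("*.png"):
            scores.append(score_fn(str(path)))
            labels.append(label)
    return roc_auc_score(labels, scores)

# Example usage (hypothetical paths):
# auroc = evaluate_detector(concept_watermark_score, "out/watermarked", "out/clean")
# print(f"AUROC: {auroc:.3f}")
```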
Limitations:
• IConMark's performance may depend on the dataset and AI model used; additional experiments on other datasets and models are needed.
• Further research is needed to determine whether the method is vulnerable to specific types of attacks.
• Generalization to the diverse image manipulations encountered in the real world still needs to be evaluated.
• Potential performance degradation with increasing watermark size and complexity requires further study.