Safe-Control: A Safety Patch for Mitigating Unsafe Content in Text-to-Image Generation Models
Created by
Haebom
Author
Xiangtao Meng, Yingkai Dong, Ning Yu, Li Wang, Zheng Li, Shanqing Guo
Outline
This paper introduces Safe-Control, a plug-and-play safety patch designed to address safety issues in text-to-image (T2I) generation models. Safe-Control uses data-driven strategies and safety-aware conditions to inject safety control signals into a locked (frozen) T2I model, reducing the generation of unsafe content. Because the patch is attached externally rather than retraining the locked model's weights, it can be applied to a variety of T2I models and overcomes limitations of existing safety mechanisms.
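The sketch below illustrates the general idea of such a patch: a small trainable branch attached to a frozen T2I backbone, conditioned on a safety-aware signal. This is not the authors' implementation; the module and parameter names (SafetyPatch, safety_cond_dim, PatchedT2IBlock) are illustrative assumptions about how a plug-and-play safety patch could be wired in PyTorch.

```python
# Conceptual sketch only: a trainable "safety patch" attached to a frozen
# ("locked") T2I backbone block. Names and shapes are hypothetical.
import torch
import torch.nn as nn


class SafetyPatch(nn.Module):
    """Small trainable branch mapping a safety-aware condition to a
    control signal that is added to the frozen backbone's features."""

    def __init__(self, feat_dim: int, safety_cond_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(safety_cond_dim, feat_dim),
            nn.SiLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Zero-initialize the output layer so the patch starts as a no-op
        # and does not disturb the locked model before training.
        nn.init.zeros_(self.proj[-1].weight)
        nn.init.zeros_(self.proj[-1].bias)

    def forward(self, feats: torch.Tensor, safety_cond: torch.Tensor) -> torch.Tensor:
        control = self.proj(safety_cond)        # (batch, feat_dim)
        return feats + control.unsqueeze(1)     # inject control signal as a residual


class PatchedT2IBlock(nn.Module):
    """Wraps one frozen backbone block with the safety patch."""

    def __init__(self, frozen_block: nn.Module, feat_dim: int, safety_cond_dim: int):
        super().__init__()
        self.frozen_block = frozen_block
        for p in self.frozen_block.parameters():  # lock the original T2I weights
            p.requires_grad_(False)
        self.patch = SafetyPatch(feat_dim, safety_cond_dim)

    def forward(self, x: torch.Tensor, safety_cond: torch.Tensor) -> torch.Tensor:
        feats = self.frozen_block(x)
        return self.patch(feats, safety_cond)


# Usage: only the patch's parameters are optimized, so the same patch can be
# plugged into different backbones without touching their weights.
frozen = nn.Linear(64, 64)                      # stand-in for a backbone block
block = PatchedT2IBlock(frozen, feat_dim=64, safety_cond_dim=16)
x = torch.randn(2, 8, 64)                       # (batch, tokens, feat_dim)
safety_cond = torch.randn(2, 16)                # safety-aware condition embedding
out = block(x, safety_cond)
optimizer = torch.optim.AdamW(block.patch.parameters(), lr=1e-4)
```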
Takeaways, Limitations
•
Takeaways:
◦
Development of a plug-and-play safety patch that can be easily applied to various T2I models.
◦
Effectively reduces the likelihood of unsafe content generation (7%)
◦
Outperforms existing safety mechanisms
◦
A single patch can address multiple safety requirements.
•
Limitations:
◦
Limitations are not mentioned in the paper (based on the limited information available).