This page collects summaries of artificial-intelligence papers published around the world. Summaries are generated with Google Gemini, and the page is operated on a non-profit basis. Copyright of each paper belongs to its authors and their institutions; please cite the source when sharing.
Causal-Adapter: Taming Text-to-Image Diffusion for Faithful Counterfactual Generation
Created by
Haebom
Author
Lei Tong, Zhihua Liu, Chaochao Lu, Dino Oglic, Tom Diethe, Philip Teare, Sotirios A. Tsaftaris, Chen Jin
Outline
Causal-Adapter is a modular framework that adapts a text-to-image diffusion backbone for counterfactual image generation. It enables causal interventions on target attributes without altering the image's core identity, and it consistently propagates each intervention's effects to causally dependent attributes. Unlike existing approaches that rely on prompt engineering, Causal-Adapter is grounded in structural causal modeling. It further employs two attribute regularization strategies: prompt alignment injection, which aligns causal attributes with text embeddings for precise semantic control, and a conditional token contrastive loss, which disentangles attribute factors and reduces spurious correlations.
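The summary gives no implementation details for the conditional token contrastive loss, but losses of this kind are commonly InfoNCE-style objectives over attribute token embeddings: tokens carrying the same attribute are pulled together while tokens of different attributes are pushed apart. A minimal NumPy sketch under that assumption (the function name, shapes, and temperature are illustrative, not taken from the paper):

```python
import numpy as np

def conditional_token_contrastive_loss(tokens, labels, temperature=0.1):
    """InfoNCE-style contrastive loss over attribute token embeddings.

    tokens: (N, D) array of token embeddings, one per attribute instance.
    labels: (N,) integer attribute ids; equal ids form positive pairs.
    Pulling same-attribute tokens together while pushing different-attribute
    tokens apart is one way to reduce spurious attribute correlations.
    """
    # L2-normalize so the dot product is cosine similarity.
    z = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    # Row-wise log-softmax via a numerically stable log-sum-exp.
    row_max = sim.max(axis=1, keepdims=True)
    log_prob = sim - (row_max + np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True)))
    # Positive pairs: same attribute id, excluding the token itself.
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    # Average negative log-likelihood over all positive pairs.
    return -np.where(pos, log_prob, 0.0).sum() / pos.sum()
```

When same-attribute embeddings already cluster together, this loss is low; shuffling the labels so that positives point in opposite directions drives it up, which is the separation pressure the regularizer is meant to provide.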
Takeaways, Limitations
•
Takeaways:
◦
It achieves up to a 91% reduction in MAE for precise attribute control, enabling generalizable counterfactual editing with faithful attribute modification and strong identity preservation.
◦
It demonstrates state-of-the-art performance on both synthetic and real-world datasets, including an 87% reduction in FID on the ADNI dataset for high-quality MRI image generation.
•
Limitations:
◦
The paper does not explicitly discuss its own limitations.