Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

DPImageBench: A Unified Benchmark for Differentially Private Image Synthesis

Created by
  • Haebom

Author

Chen Gong, Kecen Li, Zinan Lin, Tianhao Wang

Outline

This paper addresses the inconsistent and sometimes flawed evaluation protocols in differentially private (DP) image synthesis and proposes DPImageBench, a standardized benchmark for the field. DPImageBench systematically evaluates eleven major methods across nine datasets and seven fidelity and utility metrics. Using it, the authors find that the common practice of selecting the downstream classifier with the highest accuracy on the sensitive test set violates DP and inflates utility scores, and they correct this in the benchmark. They further show that pretraining on public image datasets is not always beneficial: the distributional similarity between the pretraining data and the sensitive images strongly affects the quality of the synthesized images. Finally, they find that adding noise to low-dimensional features (e.g., high-level features extracted from sensitive images) rather than to high-dimensional objects (e.g., weight gradients) is less sensitive to the privacy budget and performs better when the budget is small.
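
The contrast between the two noise-addition strategies can be made concrete with the Gaussian mechanism. Below is a minimal sketch, not DPImageBench code: the toy data, the random feature map, the clipping norms, and the single-step gradient are all assumptions for illustration. It shows that, for the same (epsilon, delta), noise added to a 64-dimensional feature statistic distorts it far less than noise added to a million-dimensional gradient.

```python
import numpy as np

def gaussian_mechanism(stat, l2_sensitivity, epsilon, delta):
    """Release a statistic under (epsilon, delta)-DP with the classic Gaussian mechanism."""
    sigma = l2_sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return stat + np.random.normal(0.0, sigma, size=stat.shape)

rng = np.random.default_rng(0)
images = rng.random((1000, 32 * 32))        # toy "sensitive" records
proj = rng.standard_normal((32 * 32, 64))   # fixed low-dimensional feature map (d = 64)

# Low-dimensional route: perturb the mean feature embedding once.
feats = images @ proj
feats /= np.maximum(np.linalg.norm(feats, axis=1, keepdims=True), 1.0)  # clip rows to norm 1
mean_feat = feats.mean(axis=0)
# Replacing one record changes the mean of n unit-norm vectors by at most 2/n.
noisy_mean = gaussian_mechanism(mean_feat, l2_sensitivity=2.0 / len(images),
                                epsilon=1.0, delta=1e-5)

# High-dimensional route: perturb a single clipped gradient of model size (~1M entries).
grad = rng.standard_normal(10**6)
grad /= max(np.linalg.norm(grad), 1.0)
noisy_grad = gaussian_mechanism(grad, l2_sensitivity=2.0, epsilon=1.0, delta=1e-5)

# The same privacy budget spreads noise over far more coordinates in the gradient case,
# so the relative distortion is much larger (and DP-SGD must compose many such steps).
print(np.linalg.norm(noisy_mean - mean_feat) / np.linalg.norm(mean_feat))
print(np.linalg.norm(noisy_grad - grad) / np.linalg.norm(grad))
```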

Takeaways, Limitations

Takeaways:
DPImageBench provides a standardized evaluation protocol and benchmark for DP image synthesis.
We reveal that the distributional similarity between the pretraining dataset and the sensitive images has a significant impact on the performance of DP image synthesis.
We suggest that adding noise to low-dimensional features is more effective under low privacy budgets than adding noise to high-dimensional features.
We point out the DP violation in the existing evaluation practice and propose a corrected model-selection protocol (see the sketch after this list).
Limitations:
Further review is needed of the comprehensiveness of the methodology, datasets, and metrics included in DPImageBench.
As new DP image synthesis methodologies emerge, DPImageBench requires continuous updates and maintenance.
Further research is needed on the generalization performance of DPImageBench in real-world application environments.
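
For the corrected evaluation, the key point is that the sensitive test set must not drive model selection. The sketch below illustrates one DP-safe variant: candidate classifiers are selected on a validation split of the synthetic data, and the sensitive test set is queried only once for the final number. The data shapes, the scikit-learn classifier, and the selection rule are illustrative assumptions, not necessarily DPImageBench's exact protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical data: synthetic images released under DP, plus the sensitive test set.
rng = np.random.default_rng(0)
X_syn, y_syn = rng.random((2000, 64)), rng.integers(0, 10, 2000)
X_test, y_test = rng.random((500, 64)), rng.integers(0, 10, 500)  # sensitive test set

# Split the *synthetic* data into train/validation; the sensitive data never guides selection.
X_tr, X_val, y_tr, y_val = train_test_split(X_syn, y_syn, test_size=0.2, random_state=0)

candidates = [LogisticRegression(C=c, max_iter=200) for c in (0.1, 1.0, 10.0)]
for clf in candidates:
    clf.fit(X_tr, y_tr)

# DP-safe selection: pick the candidate by synthetic-validation accuracy...
best = max(candidates, key=lambda clf: clf.score(X_val, y_val))
# ...and touch the sensitive test set exactly once, for the reported utility.
print("reported utility:", best.score(X_test, y_test))

# The flawed practice would instead be:
#   best = max(candidates, key=lambda clf: clf.score(X_test, y_test))
# which adapts the choice to sensitive data outside the DP guarantee and inflates utility.
```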