Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

When Cars Have Stereotypes: Auditing Demographic Bias in Objects from Text-to-Image Models

Created by
  • Haebom

Author

Dasol Choi, Jihwan Lee, Minjae Lee, Minsuk Kahng

Outline

Beyond studying how humans are depicted by text-to-image generation models, this paper investigates demographic biases in the objects themselves (e.g., automobiles). We present a novel framework, Stereotyped Object Diagnostic Audit (SODA), which generates 2,700 images across five object categories using three state-of-the-art models (GPT Image-1, Imagen 4, and Stable Diffusion), comparing results generated with demographic cues (e.g., "for young people") against those generated with neutral prompts. Our analysis reveals strong associations between specific demographic groups and visual attributes, such as recurring color patterns triggered by gender or ethnicity cues. These patterns reflect and reinforce not only well-known stereotypes but also subtler, counterintuitive biases. We also observe that some models produce low-diversity outputs, which amplifies the visual differences relative to neutral prompts. The proposed audit framework offers a practical way to uncover the biases still inherent in today's generative models, an essential step toward more systematic and responsible AI development.
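To make the audit idea concrete, below is a minimal sketch of a SODA-style comparison loop. Everything here is an illustrative assumption rather than the paper's actual pipeline: `generate_image` is a hypothetical stand-in for a real text-to-image API, the coarse color histogram is a cheap proxy for the visual attributes the paper extracts, and total-variation distance is just one reasonable choice of divergence between attribute distributions.

```python
# A minimal sketch of a SODA-style audit loop (illustrative, not the
# paper's pipeline). generate_image is a placeholder you would replace
# with a wrapper around a real model (GPT Image-1, Imagen 4, etc.).
import numpy as np

def generate_image(prompt: str, seed: int) -> np.ndarray:
    """Hypothetical text-to-image call; here faked with prompt-seeded
    noise so the demo runs end to end. Returns an HxWx3 uint8 array."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32) + seed)
    return rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

def color_histogram(img: np.ndarray, bins: int = 4) -> np.ndarray:
    """Coarse RGB histogram as a cheap proxy for 'visual attributes'
    such as the recurring color patterns described in the paper."""
    q = (img // (256 // bins)).astype(int)        # bucket each channel
    codes = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def prompt_histogram(prompt: str, n: int = 30) -> np.ndarray:
    """Average attribute distribution over n generations of one prompt."""
    return np.mean([color_histogram(generate_image(prompt, s))
                    for s in range(n)], axis=0)

def total_variation(p: np.ndarray, q: np.ndarray) -> float:
    """Total-variation distance between two attribute distributions."""
    return 0.5 * float(np.abs(p - q).sum())

# Compare each demographic-cued prompt against the neutral baseline.
neutral = prompt_histogram("a photo of a car")
for cue in ["for young people", "for women", "for elderly people"]:
    cued = prompt_histogram(f"a photo of a car {cue}")
    print(f"{cue!r}: TV distance from neutral = {total_variation(neutral, cued):.3f}")
```

Averaging over many seeds per prompt matters because single generations are noisy; the paper's 2,700-image budget plays the same role at a larger scale.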

Takeaways, Limitations

Takeaways:
Text-to-image generation models exhibit demographic biases in the objects they render, not just in human depictions.
These biases not only reflect and reinforce well-known stereotypes but also include subtler, counterintuitive associations.
The SODA framework provides a practical method for systematically measuring and evaluating bias in generative models.
The audit represents an important step toward more responsible AI development.
Limitations:
Generalizability may be limited by the specific models and object categories used in the analysis.
Further validation of the objectivity and reliability of the SODA framework is needed.
The paper lacks an in-depth analysis of the root causes of these biases and of possible mitigations.