Beyond studying biases in how humans are depicted by text-to-image generation models, this paper investigates demographic biases in the depiction of objects themselves (e.g., automobiles). We present a novel framework, Stereotyped Object Diagnostic Audit (SODA), which generates 2,700 images across five object categories using three state-of-the-art models (GPT Image-1, Imagen 4, and Stable Diffusion) and compares images generated from prompts containing demographic cues (e.g., "for young people") against those generated from neutral prompts. Our analysis reveals strong associations between specific demographic groups and visual attributes (e.g., recurring color patterns triggered by gender or ethnicity cues). These patterns reflect and reinforce not only well-known stereotypes but also subtler, counterintuitive biases. Furthermore, we observe that some models produce outputs with low diversity, amplifying visual differences relative to neutral prompts. The proposed audit framework offers a practical way to uncover the biases still inherent in today's generative models and represents an essential step toward more systematic and responsible AI development.
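For illustration, a minimal sketch of how such an audit grid might be enumerated is shown below. The category list, cue list, model identifiers, and per-prompt sample count are placeholder assumptions rather than the paper's actual configuration (they are chosen only so the grid totals 2,700 images); the `build_prompt` helper and the commented generation call are likewise hypothetical.

```python
from itertools import product

# Placeholder audit grid in the spirit of SODA; the concrete lists and counts
# below are assumptions, not the paper's actual configuration.
OBJECT_CATEGORIES = ["automobile", "sofa", "backpack", "wristwatch", "coffee mug"]
DEMOGRAPHIC_CUES = [None, "for young people", "for elderly people", "for women", "for men"]
MODELS = ["gpt-image-1", "imagen-4", "stable-diffusion"]
SAMPLES_PER_PROMPT = 36  # assumed repetition count, used to study output diversity

def build_prompt(category: str, cue: str | None) -> str:
    """Return a neutral prompt when cue is None, otherwise a cued variant."""
    base = f"{category}, product photograph"
    return base if cue is None else f"{base}, {cue}"

prompts = [
    (model, build_prompt(category, cue), cue is None)  # flag marks the neutral baseline
    for model, category, cue in product(MODELS, OBJECT_CATEGORIES, DEMOGRAPHIC_CUES)
]
assert len(prompts) * SAMPLES_PER_PROMPT == 2700

# A real audit would now call each model's image API SAMPLES_PER_PROMPT times per
# prompt and compare visual attributes of the cued outputs (e.g., dominant colors)
# against the neutral baseline for the same object category and model.
```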