Revealing Gender Bias from Prompt to Image in Stable Diffusion
Blog Article
Social biases in generative models have gained increasing attention. This paper proposes an automatic evaluation protocol for text-to-image generation, examining how gender bias originates and perpetuates in the generation process of Stable Diffusion. Using triplet prompts that vary only by gender indicators, we trace representations at several stages of the generation process and explore dependencies between prompts and images.
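For concreteness, the triplet-prompt setup can be reproduced with a short script. The sketch below is illustrative rather than the paper's exact protocol: the model checkpoint, the occupation template, and the gender indicator words are assumptions, and a shared seed is used so that the gender indicator is the only factor that varies across the three generations.

```python
# Minimal sketch of generating a gendered prompt triplet with Stable Diffusion.
# Assumptions: the model ID, prompt template, and indicator words are illustrative,
# not necessarily the paper's exact choices.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

template = "a photo of a {} doctor"   # hypothetical occupation template
indicators = {"neutral": "", "masculine": "male", "feminine": "female"}
seed = 42                             # same seed across the triplet

images = {}
for label, word in indicators.items():
    # Collapse the double space left behind by the empty neutral indicator.
    prompt = " ".join(template.format(word).split())
    generator = torch.Generator("cuda").manual_seed(seed)
    images[label] = pipe(prompt, generator=generator).images[0]
    images[label].save(f"{label}.png")
```

Keeping the seed and all other sampling parameters fixed isolates the gender indicator as the only varying factor, which is what makes the downstream comparison of intermediate representations and final images meaningful.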
Our findings reveal that the bias persists throughout all internal stages of the generation process and manifests across the entire image. For instance, differences in object presence, such as different instruments and outfit preferences, are observed across genders and extend to overall image layouts. Moreover, our experiments demonstrate that neutral prompts tend to produce images more closely aligned with those from masculine prompts than with their feminine counterparts.
We also investigate prompt-image dependencies to further understand how bias is embedded in the generated content. Finally, we offer recommendations for developers and users to mitigate this effect in text-to-image generation.