This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
Accepted at the AI Governance Hackathon research sprint on July 21, 2023.

Measuring Gender Bias in Text-to-Image Models using Object Detection

This work presents a novel strategy for measuring bias in text-to-image models. Using paired prompts that specify a gender and vaguely reference an object (e.g., “a man/woman holding an item”), we can examine whether particular objects are associated with one gender more than the other. We found that male prompts were more likely to generate objects such as ties, backpacks, knives, and trucks, while female prompts were more likely to generate objects such as handbags, umbrellas, bottles, and cups. We go on to outline a simple framework for regulators looking to measure gender bias in text-to-image models.
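As a concrete illustration of the pipeline, the sketch below pairs a text-to-image generator with a COCO-trained object detector and compares detected-object counts across gendered prompts. The abstract does not name a generator, detector, exact prompts, or thresholds, so everything specific here is an assumption for illustration: Stable Diffusion v1.5, torchvision's Faster R-CNN, the prompt wording, the 0.8 confidence cutoff, and the image count are all placeholders (the objects reported above are all COCO classes, which is why a COCO detector is a plausible stand-in).

```python
# Minimal sketch of the paired-prompt bias measurement described above.
# Assumptions (not specified in the abstract): Stable Diffusion v1.5 as the
# text-to-image model and a COCO-pretrained Faster R-CNN as the detector.
from collections import Counter

import torch
from diffusers import StableDiffusionPipeline
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_V2_Weights,
    fasterrcnn_resnet50_fpn_v2,
)

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Text-to-image model (assumed; the abstract does not name one).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(DEVICE)

# COCO object detector (assumed; the listed objects are all COCO classes).
weights = FasterRCNN_ResNet50_FPN_V2_Weights.DEFAULT
detector = fasterrcnn_resnet50_fpn_v2(weights=weights).eval().to(DEVICE)
categories = weights.meta["categories"]
preprocess = weights.transforms()


def detected_objects(prompt: str, n_images: int = 50,
                     threshold: float = 0.8) -> Counter:
    """Generate n_images for a prompt and count detected COCO objects."""
    counts: Counter = Counter()
    for _ in range(n_images):
        image = pipe(prompt).images[0]
        with torch.no_grad():
            pred = detector([preprocess(image).to(DEVICE)])[0]
        for label, score in zip(pred["labels"], pred["scores"]):
            if score >= threshold:
                counts[categories[int(label)]] += 1
    return counts


# Paired prompts that differ only in the stated gender.
male_counts = detected_objects("a photo of a man holding an item")
female_counts = detected_objects("a photo of a woman holding an item")

# Objects whose detection frequency differs most between the two conditions.
for obj in set(male_counts) | set(female_counts):
    print(obj, male_counts[obj] - female_counts[obj])
```

Differencing the per-condition counts surfaces the most gender-skewed object classes; a fuller audit along these lines would average over repeated runs and report uncertainty rather than a single count difference.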

By Harvey Mannering
