these aren't real limitations #32

Open
wants to merge 1 commit into main
15 changes: 0 additions & 15 deletions model-card.md
@@ -33,18 +33,3 @@ Functionally, these models are intended to be able to perform the following task

These models are explicitly not intended to generate images of people or other subjects we filtered for (see Appendix F of the paper for details).

# Limitations

Despite the dataset filtering applied before training, GLIDE (filtered) continues to exhibit biases that extend beyond those found in images of people.
We explore some of these biases in our paper. For example:

* It produces different outputs when asked to generate toys for boys and toys for girls.
* It gravitates toward generating images of churches when asked to generate "a religious place",
and this bias is amplified by classifier-free guidance.
* It may have a greater propensity for generating hate symbols other than swastikas and confederate flags. Our filter
for hate symbols focused specifically on these two cases, as we found few relevant images of hate symbols in our
dataset. However, we also found that the model has diminished capabilities across a wider set of symbols.

GLIDE (filtered) can fail to produce realistic outputs for complex prompts or for prompts that involve concepts that are
not well-represented in its training data. While the data for the model was filtered to remove certain types of images,
the data still exhibits biases toward Western-centric concepts.