- As noted above, not only the model but also the manner in which it is deployed and in which potential harms are measured and mitigated have the potential to create harmful bias, and a particularly concerning example of this arises in DALL·E 2 Preview in the context of pre-training data filtering and post-training content filter use, which can resul...
from dalle-2-preview/system-card.md at main · openai/dalle-2-preview
Kasper Jordaens added
- Scenario number one is a disparity in economic power, in which the folks with the data and the algorithms have—and add all of—the economic value, and the rest of the workforce adds little or none.
That scenario could create an awful social disruption.
from The Great Decoupling
Hendrik added
A quote from 2014, even more relevant today
- Scientists have long been developing machines that attempt to imitate the human brain. Just as humans are exposed to systemic injustices, machines learn human-like stereotypes and cultural norms from sociocultural data, acquiring biases and associations in the process. Our research shows that bias is not only reflected in the patterns of language, ...
from Managing the risks of inevitably biased visual artificial intelligence systems
Laura Pike Seeley added