Scientists have long been developing machines that attempt to imitate the human brain. Just as humans are exposed to systemic injustices, machines learn human-like stereotypes and cultural norms from sociocultural data, acquiring biases and associations in the process. Our research shows that bias is not only reflected in the patterns of language…
Managing the risks of inevitably biased visual artificial intelligence systems
OSF (osf.io)

**Note:** These concerns are the same as in all applications of AI: bias, privacy, interpretability.
Ingrid K. Williams • Can A.I.-Driven Voice Analysis Help Identify Mental Disorders? (Published 2022)
Yet the human flaws may become even more dramatic in an algorithmic ecosystem when the actions of mass audiences dictate what can easily be seen. Racism, sexism, and other forms of bias are a de facto part of that equation.
Kyle Chayka • Filterworld
On the Internet and in our everyday uses of technology, discrimination is also embedded in computer code and, increasingly, in artificial intelligence technologies that we are reliant on, by choice or not. I believe that artificial intelligence will become a major human rights issue in the twenty-first century. We are only beginning to understand…
Safiya Umoja Noble • Algorithms of Oppression
🚨BREAKING: US @NIST publishes 1st draft of its "AI Risk Management Framework: Generative AI Profile." Important information & quotes:
➡️ This is a comprehensive document that contains an overview of risks unique to or exacerbated by generative AI (GAI) and an extensive list of actions to manage GAI's risks.
➡️ It highlights the following risks:
➵ …