How AI reduces the world to stereotypes
andrea and others added
Scientists have long been developing machines that attempt to imitate the human brain. Just as humans are exposed to systemic injustices, machines learn human-like stereotypes and cultural norms from sociocultural data, acquiring biases and associations in the process. Our research shows that bias is not only reflected in the patterns of language, ...
Managing the risks of inevitably biased visual artificial intelligence systems
Laura Pike Seeley added
The problem may even get worse. Generative AI is producing vast amounts of questionable content that contaminates the datasets on which future AIs will be trained.
Joe Smith • The Optimized Marketer: Writing with AI: Future-proof Your Talent and Position Your Business for a World Transformed by AI (The Optimized Self)
The extent to which large language models (and I should note that while I’m focusing on image generation, there are a whole host of companies working on text output as well) depend not on carefully curated data but on the Internet itself is the extent to which AI will be democratized, for better or worse.
Ben Thompson • The AI Unbundling
sari added
In a digitally decentralized world where generative AI can lower the barriers to creation even further than ever before, and bots can be trained on the perspective of your choosing, an already fractured reality becomes at risk of disintegration. Or, as science-fiction writer and professional futurist Madeline Ashby told us in a recent interview...
Our Centaur Future - A RADAR Report
Keely Adler added
Outside of film, people tell credible dystopian AI narratives around entrenching wokeism more deeply, or worse, replacing human work. AI can be a symbiotic partner to human creativity and thought, but the optimistic story needs to be shared.
AI’s positive mission is not as crystallized as crypto’s, or at least not yet.
Unclear fo...
John Luttig • Is AI the new crypto?
sari added
Managing the risks of inevitably biased visual artificial intelligence systems
brookings.edu
Isabelle Levent and Laura Pike Seeley added