Saved by Simon Joliveau Breney
Fwd: AI Alignment Is Censorship
Information controls introduced now will also snowball into future generations of LLMs. Since new generations of LLMs rely on synthetic data generated by previous generations, they inherit any model-layer information controls, and it will be harder to reverse any censorship or information manipulation that is baked into earlier models.
Unlike social media platforms, LLM producers argue that they produce content rather than host it, a position they take to avoid copyright claims. As a result, in many legal jurisdictions, an LLM producer might be held more liable for its models' outputs than a social media platform would be for content created by its users.
Depending on what information is censored and who you ask, censorship can take many names: content policy, moderation, content standards, safety measures, anti-disinformation, combating fake news, silencing dissent. Drawing the boundaries of acceptable speech is a political question: censorship isn't always inherently objectionable. The underlying …
In the coming years, as LLM producers consolidate their power, framing alignment as censorship is critical so that we can hold producers, and, by proxy, the states that influence them, accountable.