
Large language models, explained with a minimum of math and jargon
Polysemy → polysemous → one word with multiple, related meanings; homonyms → same word form, unrelated meanings
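Not from the article, just a toy sketch of the distinction: a homonym like "bank" has unrelated senses, and a naive way to pick one is to count overlap between the sentence and hand-written cue words per sense (the cue lists here are invented for illustration; real systems use contextual embeddings instead).

```python
# Toy word-sense disambiguation (illustrative only, not the article's method).
# Each sense of the homonym "bank" gets a hand-picked set of cue words.
SENSES = {
    "bank": {
        "financial institution": {"money", "loan", "deposit", "account"},
        "river edge": {"river", "water", "fishing", "shore"},
    }
}

def guess_sense(word, sentence):
    tokens = set(sentence.lower().split())
    # Choose the sense whose cue words overlap most with the sentence.
    return max(SENSES[word], key=lambda s: len(SENSES[word][s] & tokens))

print(guess_sense("bank", "She opened an account at the bank"))
# → financial institution
print(guess_sense("bank", "They went fishing on the bank of the river"))
# → river edge
```

An LLM does something far richer: its contextual word vectors shift with the surrounding text, so the two uses of "bank" end up with different representations without any hand-written cue lists.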
If the biases in the machine are driven by biases in the culture and population, it seems the biases will persist no matter what we do to mitigate them in the machine. Better to get at the source, but that feels like an impossible task. Maybe it is easier to mitigate them in the machine after all, but how, and by whose standard?