machine learning
When a word has two unrelated meanings, as with bank, linguists call them homonyms. When a word has two closely related meanings, as with magazine, linguists call it polysemy.
Timothy B Lee • Large language models, explained with a minimum of math and jargon
Poly-semy → poly-semantic → multiple, related meanings; homonyms → same word, unrelated meanings
Because these vectors are built from the way humans use words, they end up reflecting many of the biases that are present in human language. For example, in some word vector models, doctor minus man plus woman yields nurse. Mitigating biases like this is an area of active research.
Timothy B Lee • Large language models, explained with a minimum of math and jargon
If the biases in the machine are driven by biases in the culture/population, it seems the biases will persist no matter what we do to mitigate them in the machine. It would be better to get at the source, but that feels like an impossible task. Maybe it's easier to mitigate in the machine after all… but how, and by whose standard?
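A minimal sketch of the vector arithmetic in the highlight above, assuming a pre-trained word2vec model loaded with gensim (the file name here is a placeholder for whatever embedding file is available locally):

```python
# Word-vector analogy arithmetic: doctor - man + woman
from gensim.models import KeyedVectors

# Placeholder path; substitute a real pre-trained word2vec file
# (e.g. the GoogleNews vectors) available on your machine.
model = KeyedVectors.load_word2vec_format("word2vec-model.bin", binary=True)

# Build the analogy vector and list its nearest neighbors.
result = model.most_similar(positive=["doctor", "woman"], negative=["man"], topn=5)
for word, similarity in result:
    print(f"{word}\t{similarity:.3f}")
```

Whether "nurse" actually shows up near the top depends on which corpus the embeddings were trained on, which is the point of the highlight: the vectors inherit whatever associations the training text contains.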