Isabelle Levent
@isabellelevent
I think that the language model’s failure to dismiss the class results from a slightly different cause than my student’s failure to dismiss the class with the same utterance. While the student’s failure arises from their lack of authority, the model’s failure results from the fact that it functions more like a citation of language rather than a …
If it were possible to deduce how much of an influence each individual image has on the final outcome (and the owner of each image were known and labelled, which I currently doubt happens), would it then be simple to compensate people?
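Assuming such per-image influence scores existed (which, as the note says, is doubtful today), the compensation step itself would be simple arithmetic: a pro-rata split of a payout by influence. A minimal illustrative sketch, with all names and numbers hypothetical:

```python
# Hypothetical sketch: split a payout in proportion to per-image
# influence scores. Assumes the scores and owner labels already exist,
# which is exactly the part the note doubts is tracked in practice.

def compensate(influences, total_payout):
    """Return each owner's share of total_payout, proportional to influence."""
    total = sum(influences.values())
    return {owner: total_payout * score / total
            for owner, score in influences.items()}

# Illustrative: three image owners with made-up influence scores.
shares = compensate({"alice": 0.5, "bob": 0.3, "carol": 0.2}, 100.0)
```

The hard problem, in other words, is estimating the influence scores and attaching ownership labels, not the division itself.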
My lesson from these two examples is that it might be possible to make prompting “invisible” by making it part of the UI, and to fine-tune output for as much of the writer’s context as possible to make it more useful. Latency matters, and cost matters, which is wonderful because these tend to be “regular engineering” problems rather than AI …
Many methods for creating these models don't (and, to be honest, can't) attach the name, website, and other details of every image and piece of text used to create a new image to the metadata at every step of the process.
In many instances, to say that some technologies are inherently political is to say that certain widely accepted reasons of practical necessity–especially the need to maintain critical technological systems as smoothly working entities–have tended to eclipse other sorts of moral and political reasoning.
Our intuitive moral understanding of actors and transgressions may be at odds with the inherent complexity of AI systems.
They thought it would be particularly useful for writing in a certain voice or character, or for coming up with thematically exciting words. They wondered what kind of thesaurus would come from a corpus of nautical novels (like Moby Dick).