Isabelle Levent
@isabellelevent
Philosophy and Tech Ethics and Art and Human-Centered AI

We find that models learn just as fast with many prompts that are intentionally irrelevant or even pathologically misleading as they do with instructively “good” prompts. Further, such patterns hold even for models as large as 175 billion parameters (Brown et al., 2020) as well as the recently proposed instruction-tuned models which are trained on …

However, we often found that it was the unexpected differences between the prompt and the generated image’s interpretation of it that yielded new insight for and excitement from participants.

…the reconfiguration of culture as a domain of not just human-made meanings but also machinic calculation

Even when these paragraphs fail, they make her interested in the story again. She’s curious about this computer-generated text, and it reignites her interest in her own writing.

I think that the language model’s failure to dismiss the class results from a slightly different cause than my student’s failure to dismiss the class with the same utterance. While the student’s failure arises from their lack of authority, the model’s failure results from the fact that it functions more like a citation of language rather than a …