Isabelle Levent
@isabellelevent
The fact that adding keywords like “Let’s Think Step By Step”, adding “Greg Rutkowski”, prompt weights, and even negative prompting are still so enormously effective is a sign that we are nowhere close to perfecting the “language” part of “large language models”.
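As one illustration of how blunt these levers still are, here is a minimal sketch of keyword-stuffed and negative prompting with the Hugging Face diffusers library; the model ID, prompts, and parameters are illustrative assumptions, not drawn from the quoted source:

```python
# Minimal sketch: style keywords and negative prompts in Stable Diffusion.
# Assumes the Hugging Face diffusers library; the model ID, prompts, and
# parameters below are illustrative, not from the quoted source.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Tacking artist names and quality keywords onto the prompt still shifts
# the output style substantially.
prompt = "a castle on a cliff, by Greg Rutkowski, trending on artstation"

# Negative prompting: steer the sampler *away* from these concepts.
negative = "blurry, low quality, extra limbs"

image = pipe(prompt, negative_prompt=negative, guidance_scale=7.5).images[0]
image.save("castle.png")
```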
Second, we should create a legal regime that can make our data’s collective value something we can bargain over as a group.
They will shade our constant submissions to the vast digital commons, intentional or consensual or mandatory, with the knowledge that every selfie or fragment of text is destined to become a piece of general-purpose training data for the attempted automation of everything. They will be used on people in extremely creative ways, with and without…
“There are real concerns with respect to the copyright of outputs from these models and unaddressed rights issues with respect to the imagery, the image metadata and those individuals contained within the imagery,” said Peters.
They thought it would be particularly useful for writing in a certain voice or character, or for coming up with thematically exciting words. They wondered what kind of thesaurus would come from a corpus of nautical novels (like Moby Dick).
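One plausible way to build such a corpus-specific thesaurus is to train word embeddings on the target corpus and treat nearest neighbors as near-synonyms. A minimal sketch with gensim, where the file path and query word are assumptions:

```python
# Minimal sketch: a corpus-specific "thesaurus" from word embeddings.
# Assumes gensim and a plain-text corpus file; the path and query word
# are illustrative assumptions.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

# Tokenize the corpus (e.g. Moby-Dick), one sentence-like unit per line.
with open("moby_dick.txt") as f:
    sentences = [simple_preprocess(line) for line in f if line.strip()]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=3, epochs=20)

# Nearest neighbors in embedding space act as corpus-flavored near-synonyms.
print(model.wv.most_similar("whale", topn=10))
```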
The deepest cases of creativity involve someone’s thinking something which, with respect to the conceptual spaces in their minds, they couldn’t have thought before. The supposedly impossible idea can come about only if the creator changes the pre-existing style in some way. It must be tweaked, or even radically transformed, so that thoughts are…
There’s another edge case as well; in theory, with the same prompts and the random seed that’s used for generating the images, you could end up with someone else generating the same, or a very similar, image as what you created.
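That determinism is easy to see in practice: fixing the random seed makes sampling reproducible, so two people with an identical prompt, model, seed, and settings can land on essentially the same image. A minimal sketch with the diffusers library, where the model ID, prompt, and seed are illustrative assumptions:

```python
# Minimal sketch: fixed seeds make diffusion sampling reproducible.
# Assumes the diffusers library; model ID, prompt, and seed are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse in a storm, oil painting"

# Anyone who reuses this prompt, model, and seed (with the same scheduler
# and settings) should reproduce essentially the same image.
generator = torch.Generator("cuda").manual_seed(1234)
image = pipe(prompt, generator=generator).images[0]
image.save("lighthouse_seed1234.png")
```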
We find that models learn just as fast with many prompts that are intentionally irrelevant or even pathologically misleading as they do with instructively “good” prompts. Further, such patterns hold even for models as large as 175 billion parameters (Brown et al., 2020) as well as the recently proposed instruction-tuned models which are trained on…
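To make the comparison concrete, here is a minimal sketch of what an instructively “good” versus a “pathologically misleading” prompt template might look like for a single natural-language-inference example, in the spirit of that setup; the templates and example are illustrative assumptions, not the paper’s exact materials:

```python
# Minimal sketch: a "good" vs. a misleading prompt template for the same
# NLI example. Templates and example are illustrative assumptions, not
# the paper's exact materials.
premise = "A dog is running through the surf."
hypothesis = "An animal is at the beach."

# Instructively "good": the instruction actually describes the task.
good = (
    f'Given "{premise}", is it true that "{hypothesis}"? '
    "Answer yes or no."
)

# Pathologically misleading: the instruction has nothing to do with
# entailment, yet models often learn the task just as fast from examples.
misleading = (
    f'"{premise}" Is the passage above a recipe for banana bread? '
    f'"{hypothesis}" Answer yes or no.'
)

print(good)
print(misleading)
```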
Much of the discussion this year is about text-to-image, but I believe this is a temporary stage; these things are going to continue evolving very quickly.