Isabelle Levent
@isabellelevent
There’s another edge case as well: in theory, with the same prompt and the same random seed used to generate the image, someone else could end up generating the same, or a very similar, image as the one you created.
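To make the seed point concrete, here is a minimal sketch of deterministic image generation using the Hugging Face diffusers library (an assumption on my part; the model name, prompt, and seed are illustrative, not from the original quote):

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative model and prompt; any seeded diffusion pipeline behaves the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
prompt = "a watercolor painting of a lighthouse at dusk"

# Fixing the generator's seed makes the sampling process reproducible.
generator = torch.Generator(device="cuda").manual_seed(42)
image_a = pipe(prompt, generator=generator).images[0]

# Re-seeding with the same value walks the sampler through the same noise,
# so a different person running this would get the same image (or, across
# differing hardware and library versions, a very similar one).
generator = torch.Generator(device="cuda").manual_seed(42)
image_b = pipe(prompt, generator=generator).images[0]
```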
There are a couple of reasons why Wordcraft may have struggled with style and voice... Another reason could have been limitations of the underlying model. LaMDA and other similar language models are trained to be most confident on the kind of text they see most often, typically internet data. However, professional creative writers are usually writing …
We find that models learn just as fast with many prompts that are intentionally irrelevant or even pathologically misleading as they do with instructively “good” prompts. Further, such patterns hold even for models as large as 175 billion parameters (Brown et al., 2020) as well as the recently proposed instruction-tuned models which are trained on …
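A rough sketch of the kind of comparison that finding describes: build one few-shot prompt from an instructive template and one from an irrelevant, misleading template, then send both to a model and compare task accuracy. The task, templates, and examples here are illustrative, not taken from the paper:

```python
examples = [("the movie was wonderful", "positive"),
            ("a dull, lifeless film", "negative")]
query = "an astonishing, heartfelt debut"

def build_prompt(template, examples, query):
    # Format each labeled example with the template, then append the query
    # with an empty label for the model to complete.
    shots = "\n".join(template.format(text=t, label=l) for t, l in examples)
    return shots + "\n" + template.format(text=query, label="").rstrip()

instructive = "Review: {text}\nSentiment: {label}"        # describes the task
misleading = "Weather report: {text}\nForecast: {label}"  # pathologically irrelevant

print(build_prompt(instructive, examples, query))
print(build_prompt(misleading, examples, query))
# The quoted finding: models often learn the task about as fast from the
# misleading template as from the instructive one.
```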
Many methods for creating these models don’t (and, to be honest, can’t) attach the name, website, and other details of every image and piece of text used to create a new image to the metadata at every step of the process.

However, we often found that it was the unexpected differences between the prompt and the model’s interpretation of it in the generated image that yielded new insight for, and excitement from, participants.
Like oil and land, data are a commons that is commodified by private actors for profit. The commons being commodified is our essence as humans: our interactions with society at large.