Isabelle Levent
@isabellelevent
on metaphors for LLMs
With so much focus on creation, few systems consider revision, yet revision is where the average writer gets the most outside help.
There are a couple of reasons why Wordcraft may have struggled with style and voice... Another reason could have been limitations of the underlying model. LaMDA and other similar language models are trained to be most confident on the kind of text they see most often, typically internet data. However, professional creative writers are usually writing...
Many methods for creating these models don't (and, to be honest, can't) attach the name, website, and other details of every image and piece of text used to create a new image to the metadata at every step of the process.
Many people don't consider that when they use the internet, whether by making a simple HTML/CSS site or by publishing through a big conglomerate's platform, scrapers are scraping and crawlers are crawling their content unless they have specifically configured robots.txt and noindex rules to prevent it.
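The mechanics the quote alludes to can be sketched concretely. A minimal robots.txt placed at the site root asks named crawlers not to fetch pages, and a robots meta tag asks search engines not to index a page even if it is fetched. The user-agent tokens below (GPTBot, CCBot) are examples of publicly documented crawler names, and which crawlers honor these rules is up to the crawler, not the site owner.

```text
# robots.txt — served at https://example.com/robots.txt
# Ask specific AI/data crawlers not to fetch anything
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers may proceed
User-agent: *
Disallow:
```

For the noindex side, a page-level meta tag goes in the HTML head:

```text
<meta name="robots" content="noindex">
```

Note that these are requests, not enforcement: a well-behaved crawler checks robots.txt before fetching, but nothing technically prevents a scraper from ignoring it.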
Instead, I’d like us to ask: in whose voice do our machines write? What voices do they obfuscate? Where do their words come from? In short, I’d like us to ask questions about power, and the ways in which it functions through and around language.
Now none of this is meant to say that I think programmers, artists, and engineers have no responsibilities when it comes to the outputs of machine learning models. In fact, I think we bear responsibility for everything these models do. (I never, for example, attribute authorship to a program or a model. If I publish the results of a text generator, ...
The fact that adding keywords like "Let's Think Step By Step," adding "Greg Rutkowski," using prompt weights, and even negative prompting are still so enormously effective is a sign that we are nowhere close to perfecting the "language" part of "large language models".