Isabelle Levent
@isabellelevent
Academically, this is a collision of everything from computer science and art history to media studies to disruptive innovation to labor economics, and no one of these disciplines seems sufficient to cover the topic.
Now none of this is meant to say that I think programmers, artists and engineers have no responsibilities when it comes to the outputs of machine learning models. In fact, I think we bear responsibility for everything these models do. (I never, for example, attribute authorship to a program or a model. If I publish the results of a text generator,
A couple of participants found success using the chatbot as a convenient search engine alternative (KL, WT). KL wrote: “It’s kind of great to use the chat interface and treat LaMDA as a thesaurus, quote finder, and general research assistant.”
Many methods for creating these models don't (and, to be honest, can't) attach the name, website, and other details of every source image and piece of text to a new image's metadata at every step of the process.
OpenAI, which has been accused by its peers of releasing tools to the public with reckless speed, is particularly good at designing interfaces for its models that feel like magic. “It’s a conscious design imperative to produce these moments of shock and awe,” Crawford says. “We’re going to keep having those moments of enchantment.”
Instead, I’d like us to ask: in whose voice do our machines write? What voices do they obfuscate? Where do their words come from? In short, I’d like us to ask questions about power, and the ways in which it functions through and around language.
My lesson from these two examples is that it might be possible to make prompting “invisible” by making it part of the UI, and finetuning output for as much of the writer’s context as possible to make it more useful. Latency matters, and cost matters, which is wonderful because these tend to be “regular engineering” problems rather than AI problems.
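The idea of “invisible” prompting can be sketched roughly like this: the writer clicks a UI action, and a hidden template folds their own context into the model call. This is a minimal illustration, not any particular product's implementation; the function name, template wording, and action names are all hypothetical.

```python
def build_hidden_prompt(action, selection, document_title):
    """Assemble a prompt the writer never sees: a UI action
    ("shorten", "continue", ...) plus the writer's own selected
    text and document context become the model's input.
    All names and templates here are illustrative assumptions."""
    templates = {
        "shorten": "Rewrite the following more concisely:\n{sel}",
        "continue": "Continue this draft in the same voice:\n{sel}",
    }
    body = templates[action].format(sel=selection)
    # Prepending document context is one cheap way to condition
    # output on the writer's situation without exposing a prompt box.
    return f'Document: "{document_title}"\n\n{body}'

prompt = build_hidden_prompt(
    "shorten", "It was a dark and stormy night...", "Chapter 1"
)
```

Because the template is fixed and small, the remaining work (caching, truncating context to control latency and token cost) is exactly the kind of “regular engineering” problem the passage describes.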
Many people don't consider that when they publish on the internet, whether as a simple HTML/CSS site or through a big conglomerate's platform, scrapers are scraping and crawlers are crawling their content unless they've specifically configured robots.txt and no-index rules to prevent it.
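As a minimal sketch of what that configuration looks like: a robots.txt file at the site root asks crawlers to stay out, either all of them or specific ones (GPTBot and CCBot are real crawler user-agent names; compliance is voluntary, since robots.txt is a request, not an enforcement mechanism).

```
# robots.txt — ask all crawlers to skip the entire site
User-agent: *
Disallow: /

# or target specific AI-training crawlers by user-agent
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

No-index rules work at the page level instead: adding `<meta name="robots" content="noindex">` to a page's `<head>` asks search engines not to list that page, independently of whether it was crawled.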
Crawford elaborated in an interview. “When you have this enchanted determinism, you say, we can’t possibly understand this. And we can’t possibly regulate it when it’s clearly so unknown and such a black box,” she says. “And that’s a trap.”