Isabelle Levent
@isabellelevent
Our intuitive moral understanding of actors and transgressions may be at odds with the inherent complexity of AI systems.
This builds on a growing body of work showing that our "mind perception" (which manifests as inferences of intentions, beliefs, and values) meaningfully varies across individuals and shapes our moral judgments.
Much of the discussion this year is about text-to-image, but I believe this is a temporary stage; these things are going to continue evolving very quickly.
Many people don't consider that when they use the internet, be that making a simple HTML/CSS site, or using a site through a big conglomerate, scrapers are scraping and crawlers are crawling the content unless you've specifically configured robots.txt and no-index rules to prevent it.
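The opt-out mechanism mentioned above can be sketched as a `robots.txt` file served at the site root. This is a minimal example, not an exhaustive list: `CCBot` (Common Crawl) and `GPTBot` (OpenAI) are real crawler tokens, but which bots to block is a choice each site owner has to make, and the rules are advisory — compliant crawlers honor them, while arbitrary scrapers may not.

```text
# robots.txt — served at https://example.com/robots.txt
# Advisory only: well-behaved crawlers honor these rules; scrapers may ignore them.

# Block Common Crawl's crawler (a common source of ML training data)
User-agent: CCBot
Disallow: /

# Block OpenAI's crawler
User-agent: GPTBot
Disallow: /

# Allow all other crawlers
User-agent: *
Allow: /
```

The no-index side works differently: a per-page `<meta name="robots" content="noindex">` tag (or an `X-Robots-Tag: noindex` HTTP header) tells search engines not to list a page even if they crawl it.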
The fact that adding keywords like "Let's Think Step By Step", adding "Greg Rutkowski", prompt weights, and even negative prompting are still so enormously effective is a sign that we are nowhere close to perfecting the "language" part of "large language models".
A recurring theme in participant feedback was that the language model lacked taste and intentionality... In contrast, good writers are skilled not only in producing but also in discerning good language. In other words, they have taste: the ability to decide why one sentence is interesting while another is not.
Academically, this is a collision of everything from computer science and art history to media studies to disruptive innovation to labor economics, and no one of these disciplines seems sufficient to cover the topic.
Now none of this is meant to say that I think programmers, artists and engineers have no responsibilities when it comes to the outputs of machine learning models. In fact, I think we bear responsibility for everything these models do. (I never, for example, attribute authorship to a program or a model. If I publish the results of a text generator,