Isabelle Levent
@isabellelevent

Our intuitive moral understanding of actors and transgressions may be at odds with the inherent complexity of AI systems.
There are a couple of reasons why Wordcraft may have struggled with style and voice... Another reason could have been limitations of the underlying model. LaMDA and other similar language models are trained to be most confident on the kind of text they see most often, typically internet data. However, professional creative writers are usually writing…
This builds on a growing body of work showing that our ‘‘mind perception’’ (which manifests as inferences of intentions, beliefs, and values) meaningfully varies across individuals and shapes our moral judgments.
For a computer to make a subtle combinational joke, never mind to assess its tastefulness, would require, first, a data-base with a richness comparable to ours, and, second, methods of link-making (and link-evaluating) comparable in subtlety with ours.

The Lab’s primary focus is on the ways in which artists and designers are adopting, adapting and remaking AI processes, building their own datasets and reaching into the ‘grey box’ of AI technologies.
Many methods for creating these models don't (and, to be honest, can't) attach the name, website, and other details of every image and piece of text used to create a new image to the metadata at every step of the process.
With so much focus on creation, few systems consider revision, which is where the average writer gets the most outside help.