As conversational AI agents become more interactive and personalized, they will surpass human influencers in their ability to shape our decisions without us realizing it.
I’m coming to terms with the high probability that, going forward, AI will write most of the code I ship to prod. It already does it faster, with results similar to what I’d have typed out myself. For languages and frameworks I’m less familiar with, it does a better job than me.
Enter Large Language Models (LLMs). The first tranche of products and startups leveraging LLMs has kept within the mental model of selling software to achieve step-function improvements in end-user productivity. The "Copilot for [x]" trend reflects this mental model. While there are fantastic startups innovating to improve employee productivity, ...
The web is no longer primarily human-generated text created for human readers; it is increasingly AI-generated text created for algorithmic amplification, mixed with human-generated text optimized for the same algorithmic reward functions.
As AI systems train on this degraded corpus, they learn to reproduce its characteristics—shallow reasoning, ...
The era of AI-induced mental illness is going to make the era of social media-induced mental illness look like the era of, like, printing press-induced mental illness.
While LLMs are designed to emulate human-like responses, it does not follow that the analogy extends to the underlying cognition giving rise to those responses.
Agrawal et al. argue that the framing of AI automation versus augmentation is wrong. Rather than being distinct, the two are often one and the same. They say that AI, initially intended to automate tasks, inadvertently acts as a force for augmentation of the broader workforce. For example, automating diagnostic skills in healthcare could diminish ...