The Goldilocks Zone
The first step is to understand the fundamental difference between humans and AIs. We are analog, chemical beings, with emotions and feelings. Compared with machines, we think slowly—and we act too fast, failing to consider the long-term consequences of our behavior (which AI can help predict). So we should not compete with AI; we should use it.
Esther Dyson • Don’t Fuss About Training AIs. Train Our Kids
LeCun points to four essential characteristics of human intelligence that current AI systems, including LLMs, can’t replicate: reasoning, planning, persistent memory, and understanding the physical world. He stresses that LLMs’ reliance on textual data severely limits their understanding of reality: “We’re easily fooled into thinking they are intelligent…”
Azeem Azhar • 🧠 AI’s $100bn question: The scaling ceiling
Where and how will humans show the eminence of our species when AI becomes better than us at everything? I feel sort of desperate for some human-centric optimism here because the whole project of being human feels really weird as our specialness becomes automated.
A couple of observations. First, I do think the last mile is the most difficult.
Ben Horowitz • Ben’s Perspective Part 2: A Few Thoughts About ChatGPT With Ben Horowitz
focus less on AI as something separate from humans, and more on tools that enhance human cognition rather than replacing it… If we want a future that is both superintelligent and "human", one where human beings are not just pets, but actually retain meaningful agency over the world, then it feels like something like this is the most natural option.
We’re far from replacing humans for many tasks now being delegated to large language models. That doesn’t mean LLMs can’t be helpful; it just means they’re being used wrong. The key is applying them to tasks they’re well-suited to, such as analyzing, synthesizing, and manipulating data, and not for making important decisions on behalf of people.
Reconsidering the Role of AI: Valuing Process Over Output