AI Doomers Had Their Big Moment
The idea is that humans will always remain in command. Essentially, it’s about setting boundaries, limits that an AI can’t cross. And ensuring that those boundaries create provable safety all the way from the actual code to the way it interacts with other AIs—or with humans—to the motivations and incentives of the companies creating the technology.
Will Douglas Heaven • DeepMind’s Cofounder: Generative AI Is Just a Phase. What’s Next Is Interactive AI.
And in the end, OpenAI doesn’t matter. They are making the same mistakes we are in their posture relative to open source, and their ability to maintain an edge is necessarily in question. Open source alternatives can and will eventually eclipse them unless they change their stance. In this respect, at least, we can make the first move.
Simon Willison • Leaked Google document: “We Have No Moat, And Neither Does OpenAI”
Kokotajlo: I do agree that humanity is much better at regulating against problems that have already happened when we learn from harsh experience. Part of why the situation that we’re in is so scary is that for this particular problem, by the time it’s already happened, it’s too late.
Smaller versions of it can happen, though. For example, the stuff ...
Opinion | The Forecast for 2027? Total A.I. Domination.
The tendency to think of A.I. as a magical problem solver is indicative of a desire to avoid the hard work that building a better world requires. That hard work will involve things like addressing wealth inequality and taming capitalism. For technologists, the hardest work of all—the task that they most want to avoid—will be questioning the assumptions ...