weirdly my main reaction is gratitude to the OpenAI founders for actually creating a governance structure that committed them to sacrifice profits if the mission required it. no idea if that's what happened here, but at least we know the commitment had teeth.
My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one who cringed during it.
I think the mismatch between mission and reality was impossible to fix.
what OpenAI, Anthropic, and DeepMind have all tried to do is raise billions and tap the vast GPU resources of tech giants without having the resulting tech de facto controlled by them. I'm arguing the OpenAI fracas shows that might be impossible.
“Right now, people [say] ‘you have this research lab, you have this API [software], you have the partnership with Microsoft, you have this ChatGPT thing, now there is a GPT store.’ But those aren’t really our products,” Altman said. “Those are channels into our one single product, which is intelligence, magic intelligence in the sky. I think that’s...
With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM that they weren't 100% sure was safe. The company split, and Anthropic was born.
Unlike a calculator, which can perform only a limited set of operations, AGI can generalize, learn, and comprehend.
In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said, without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger ...