weirdly my main reaction is gratitude to the OpenAI founders for actually creating a governance structure that committed them to sacrifice profits if the mission required it. no idea if that's what happened here, but at least we know the commitment had teeth.
My guess is that watching the keynote would have made the mismatch between OpenAI's mission and the reality of its current focus impossible to ignore. I'm sure I wasn't the only one who cringed during it.
I think the mismatch between mission and reality was impossible to fix.
And if you make a successful alternative App Store, and get, say, 100m people in the EU to install it, you'll owe Apple roughly €50m/year as a "Core Technology Fee."
NO GATEKEEPING HERE, EU! None at all! Full compliance, totally. Pinky promise!!
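A minimal sketch of the arithmetic behind that €50m/year figure, assuming Apple's publicly stated CTF terms: €0.50 per first annual install, with the first 1M installs per year exempt (the threshold and rate here are taken from Apple's announced EU terms, not from the post itself):

```python
# Core Technology Fee sketch under assumed terms:
# €0.50 per first annual install beyond a 1M-install free threshold.
CTF_PER_INSTALL_EUR = 0.50
FREE_INSTALL_THRESHOLD = 1_000_000

def annual_ctf_eur(installs: int) -> float:
    """Estimated yearly Core Technology Fee in euros for an install base."""
    billable = max(0, installs - FREE_INSTALL_THRESHOLD)
    return billable * CTF_PER_INSTALL_EUR

# 100M installs -> 99M billable * €0.50 = €49.5M/year, i.e. roughly €50m.
print(annual_ctf_eur(100_000_000))
```

The 1M-install exemption barely dents the total at this scale, which is why the post's round €50m figure is a fair approximation.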
The future of AI is a topic that deserves careful rational thought, not half-baked opinions and tribalism. I would advise against thinking in terms of something as silly as “factions.”
Now, Sam is one of those unstoppable forces, so he may very well soon enlist other backers and leave, but for now it looks like OpenAI has split into two, and Microsoft has managed to seize both halves.
"The amount of serendipity that will occur in your life is directly proportional to the degree to which you do something you're passionate about combined with the total number of people to whom this is effectively communicated." https://t.co/KfFpouh7Dm
With the release of Codex, however, we had the first culture clash that was beyond saving: those who really believed in the safety mission were horrified that OAI was releasing a powerful LLM they weren't 100% sure was safe. The company split, and Anthropic was born.