OpenAI's AI reasoning model 'thinks' in Chinese sometimes and no one really knows why | TechCrunch
techcrunch.com
Specific abilities are very hard to predict. Back when I was working on GPT-2 and GPT-3, we were asking, “When does arithmetic come into play? When do models learn to code?” Sometimes it’s very abrupt. It’s like how you can predict statistical averages of the weather, but the weather on one particular day is very hard to predict. One of the first…
…there is no way to detect whether or not a piece of text is AI-generated. A couple of rounds of prompting remove the ability of any detection system to identify AI writing.
What this means, in part, is that LLMs never know a fact or understand a concept in the way that we do. Instead, every time you prompt an LLM with a question, or ask it to take some action, you are simply asking it to make a prediction about what tokens are most likely to follow the tokens that comprise your prompt in a contextually relevant way.
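As a concrete illustration of that point, here is a minimal sketch of how "answering" reduces to repeatedly predicting the next token. It uses GPT-2 through the Hugging Face transformers library purely for familiarity; the model choice and the greedy decoding loop are assumptions for the example, not the method of any model discussed above.

```python
# Minimal sketch: generation as iterated next-token prediction.
# GPT-2 and greedy decoding are illustrative choices, not claims about
# how any particular production model works.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(5):  # extend the prompt by five tokens, one prediction at a time
        logits = model(input_ids).logits           # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()           # greedy pick: the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Nothing in that loop consults a store of facts; the model only scores which token is most likely to come next given the tokens so far, which is the sense in which the passage above says an LLM never "knows" anything.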