Yes, LLMs can reason, plan, make judgments under uncertainty, and integrate these skills across a wide variety of domains; but true AGI supposedly rests on the assumption that these systems, given their ability to match or surpass human intelligence, would be capable of going out into the world and actively doing the things we do.