I also don't really see AGI emerging from LLMs any time soon, but it could be argued that human intelligence is also just 'fancy autocomplete'.
But that's my point - in some ways it's obvious that humans are not just doing "fancy autocomplete", because humans generally don't make the kinds of hallucination errors that LLMs make. Those hallucination errors do make sense, though, once you view an LLM as just a statistical relationship between tokens (see the toy sketch below).
One thing to emphasize: I'm not saying the "understanding" that humans seem to possess isn't itself just some lower-level statistical process - I'm not "invoking a soul". But I am saying it appears to be fundamentally different from, and in many cases more useful than, what an LLM can do.
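To make that concrete, here's a deliberately tiny sketch of what "a statistical relationship between tokens" means. This is my own toy example (the corpus, function names, and bigram setup are all made up for illustration - real LLMs are vastly more sophisticated), but it shows the failure mode in miniature: a model that only stores which token tends to follow which will complete a prompt fluently, and sometimes falsely, because nothing in it represents the world, only co-occurrence statistics.

```python
# Toy bigram "language model": learns only which word tends to follow which.
# It can generate grammatical-sounding completions that are confidently wrong,
# because it encodes token statistics, not facts about the world.
import random
from collections import defaultdict

corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is known for the eiffel tower . "
    "madrid is known for the prado ."
).split()

# Count word -> next-word transitions.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def complete(prompt_word: str, length: int = 10) -> str:
    """Sample a continuation using only the bigram statistics."""
    out = [prompt_word]
    for _ in range(length):
        candidates = transitions.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

random.seed(0)
# Depending on the sampled path, this can produce completions like
# "the capital of spain is known for the eiffel tower ." -
# fluent, confident, and false, because the model only knows which
# tokens follow which, not what is actually true.
print(complete("the"))
```

A human who knows what Spain and the Eiffel Tower *are* wouldn't make that particular mistake, which is the asymmetry I'm pointing at.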
Well, no. Humans do not think sequentially. But even putting that aside, any "autocomplete" we perform is based on a world model, not on tokens in a string.