> but it could be argued that human intelligence is also just 'fancy autocomplete'.

But that's my point - in some ways it's obvious that humans are not just doing "fancy autocomplete", because humans generally don't make the types of hallucination errors that LLMs make. Conversely, those hallucination errors do make sense once you think of an LLM as just a statistical relationship between tokens.

One thing to emphasize: I'm not saying the "understanding" humans seem to possess isn't itself just some lower-level statistical process - I'm not "invoking a soul". But I am saying it appears to be fundamentally different from, and in many cases more useful than, what an LLM can do.
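
To make the "statistical relationship between tokens" point concrete, here's a toy sketch. The bigram model and tiny corpus are my own illustrative assumptions - real LLMs use neural networks over vastly more context - but the basic move of generating the next token from observed probabilities is the same:

    # Toy "statistical relationship between tokens": a bigram model.
    # Corpus and counts are made up for illustration only.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each token follows each other token.
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_token(prev):
        # Sample the next token in proportion to how often it followed prev.
        tokens, weights = zip(*counts[prev].items())
        return random.choices(tokens, weights=weights)[0]

    # Generate a short continuation from "the". The output sounds
    # locally plausible, but nothing here "understands" cats or mats -
    # which is roughly the shape of a hallucination.
    tok = "the"
    out = [tok]
    for _ in range(6):
        if not counts[tok]:
            break  # dead end: this token was never followed by anything
        tok = next_token(tok)
        out.append(tok)
    print(" ".join(out))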

> because humans generally don't make the types of hallucination errors that LLMs make.

They do, though - I've noticed myself and others saying things in conversation that sound roughly right and are based on correct things we've learned previously, but because our memory of those things is only partial and mixed with other related information, we often end up saying things that are quite incorrect or that combine two topics in a way that doesn't make sense.