> Humans have a long list of cognitive shortcomings. We find them interesting and give them all sorts of names, like cognitive dissonance or optical illusions. But we don't draw silly conclusions like "humans don't reason."
Exactly! In fact, things like illusions are excellent windows into how the mind really works. Most visual illusions are a fundamental artifact of how the brain has to turn a 2D image into a 3D, real-world model. They give clues about how it does that, and about how the contours of the natural world guided the evolution of the visual system (I think Steven Pinker's "How the Mind Works" gives excellent examples of this).
So I am not at all saying that what LLMs do isn't extremely interesting or useful. What I am saying is that the types of errors you get give a window into how an LLM works, and these hint at some fundamental limitations in what an LLM is capable of, particularly around novel discovery and the development of new ideas and theories that aren't just "rearrangements" of existing ones.
ANN architectures are not like brains. They don't come pre-baked with all the structure and tweaking that evolution provides. They're far more of a blank slate, and the transformer is one of the most blank-slate architectures there is.
At best, a failure mode in GPT-N gives insight into how some concept is understood by GPT-N. It rarely says anything about language modelling or Transformers in general. GPT-2 had wildly different failure modes from GPT-3, which itself has wildly different failure modes from GPT-4.
All a transformer's training objective asks it to do is spit out the next token. How it should do so is left for the transformer to figure out along the way, and everything is fair game.
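To make that concrete, here's a minimal sketch of the objective in PyTorch (toy dimensions, a single linear layer standing in for the whole transformer stack, all names illustrative): the only training signal is cross-entropy on the next token, nothing else.

```python
# Sketch of next-token prediction: the model is scored only on how well it
# predicts token t+1 given tokens 1..t; *how* it does so is unconstrained.
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32

model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, vocab_size),  # stand-in for the transformer stack
)

tokens = torch.randint(0, vocab_size, (1, 16))   # a toy token sequence
logits = model(tokens[:, :-1])                   # predict the next token at each position
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
loss.backward()  # the entire training signal: "spit out the right next token"
```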
And confusing words with wildly different meanings but some similarity along another axis is something that happens to humans as well. Transformers don't see words or letters, only tokens. So just because it doesn't seem to you like two tokens should be confused doesn't mean there isn't a valid point of confusion there.
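For example, a quick sketch with a real tokenizer (assuming the tiktoken package is installed; the GPT-2 encoding and the sample words are just illustrative) shows that the model's input is a list of token IDs, and words that look unrelated to a reader can still share sub-word pieces:

```python
# What the model actually "sees": token IDs, not letters or whole words.
import tiktoken

enc = tiktoken.get_encoding("gpt2")
for word in ["understand", "understatement", "underscore"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    print(word, "->", ids, pieces)
```

Two strings that a human would never mix up can end up sharing token IDs or overlapping sub-word pieces, which is where some of these "impossible" confusions come from.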