>For the purpose of AGI, LLMs are starting to look like a local maximum.
I've been saying this since they started popping off last year and everyone was getting euphoric about them. I'm basically a layman - a pretty good programmer and software engineer who took a statistics and AI class 13 years ago in university. That said, it seems extremely obvious to me that these things are likely not the way to AGI. They're not reasoning systems. They don't work with axioms. They don't model reality. They don't really do anything: they just generate stochastic output by sampling symbols according to the probability of their appearing in a particular order in a given corpus.
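To make that "probabilities of symbols" point concrete, here's a toy bigram sampler in Python - a minimal sketch, obviously nothing like a real transformer, but it shows the bare shape of "generate stochastic output from corpus statistics":

    import random
    from collections import defaultdict, Counter

    corpus = "the cat sat on the mat the cat ate the rat".split()

    # Count how often each token follows each other token.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    # Generate: each step samples "what tends to come next in
    # the corpus": stochastic output, no model of reality.
    tok = "the"
    out = [tok]
    for _ in range(8):
        followers = counts[tok]
        if not followers:
            break  # dead end: token never appears mid-corpus
        total = sum(followers.values())
        weights = [c / total for c in followers.values()]
        tok = random.choices(list(followers), weights)[0]
        out.append(tok)
    print(" ".join(out))

An LLM swaps the bigram table for a transformer conditioned on a long context window, but the generation loop is the same shape: sample the next symbol, append it, repeat.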
It continues to astound me how much money is being dumped into these things.