> The argument about AGI from LLMs is not based on the current state of LLMs, but on the rate of progress over the last 5+ years or so.

And what I'm saying is that I find that argument to be incredibly weak. I've seen it time and time again, and honestly at this point it just feels like a "humans should be a hundred feet tall based on their rate of growth in their early years" argument.

While I've also been amazed at the past progress in LLMs, I don't see any reason to expect that rate to continue in the future. What I do see, the more I use the SOTA models, is fundamental limitations in what LLMs are capable of.

Expecting the rate of progress to drop off so abruptly, after realistically just a few years of serious work on the problem, seems like the grander and more unreasonable prediction to me than expecting it to continue at its current pace for even just 5 more years.
10 years of progress is a flash in the pan of human progress. The first deep learning models that worked appeared in 2012. That was like yesterday. You are completely underestimating the rate of change we are witnessing. Compute scaling is not at all similar to biological scaling.
Happy to review this in 5 years