The argument about AGI from LLMs is not based on the current state of LLMs, but on the rate of progress over the last 5+ years or so. It wasn't very long ago that almost nobody outside of a few niche circles seriously thought LLMs could do what they do right now.

That said, my personal hypothesis is that AGI will emerge from video generation models rather than text generation models. A model that takes an arbitrary real-time video input feed and must predict the next, say, 60 seconds of video would have to have a deep understanding of the universe, humanity, language, culture, physics, humor, laughter, problem solving, etc. This pushes the fidelity of both input and output far beyond anything that can be expressed in text, but also creates extraordinarily high computational barriers.

> The argument about AGI from LLMs is not based on the current state of LLMs, but on the rate of progress over the last 5+ years or so.

And what I'm saying is that I find that argument incredibly weak. I've seen it time and time again, and honestly at this point it just feels like a "humans should be a hundred feet tall based on their rate of change in their early years" argument.

While I've also been amazed at the past progress in LLMs, I don't see any reason to expect that rate to continue in the future. What I do see, the more I use the SOTA models, are fundamental limitations in what LLMs are capable of.

If it's true that predicting the next word can be turned into predicting the next pixel, and that you could run a zillion hours of video feed into it, then I agree. It seems that the basic algorithm is there. Video is much less information dense than text, but if the scale of compute can reach the tens of billions of dollars, or more, you have to expect that AGI is achievable. I think we will see it in our lifetimes. It's probably 5 years away.
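For what it's worth, the "next word → next pixel" move is less of a leap than it sounds, because the training objective doesn't change at all, only the vocabulary does. Here's a toy sketch (my own illustration, not anything from a real model): the same autoregressive cross-entropy loss applied to a 50k-word vocabulary and to 256 quantized pixel intensities.

```python
import numpy as np

def next_token_nll(logits, targets):
    """Mean negative log-likelihood of targets under softmax(logits).

    Works identically whether the "tokens" are words or pixel values;
    only the size of the last axis (the vocabulary) differs.
    """
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

rng = np.random.default_rng(0)

# Text: vocabulary of 50,000 words; predict the next word at each position.
text_logits = rng.standard_normal((16, 50_000))
text_targets = rng.integers(0, 50_000, size=16)

# Video: vocabulary of 256 pixel intensities; predict the next pixel.
pixel_logits = rng.standard_normal((16, 256))
pixel_targets = rng.integers(0, 256, size=16)

print("text NLL: ", next_token_nll(text_logits, text_targets))
print("pixel NLL:", next_token_nll(pixel_logits, pixel_targets))
```

The hard part isn't the objective, it's the scale: at even modest resolution and frame rate, a minute of video is orders of magnitude more positions to predict than a page of text, which is where the compute bill comes from.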