If it's true that predicting the next word can be turned into predicting the next pixel, and that you could run a zillion hours of video feed into that, I agree. It seems the basic algorithm is there. Video is much less information dense than text, but if the scale of compute can reach the tens of billions of dollars, or more, you have to expect that AGI is achievable. I think we will see it in our lifetimes. It's probably 5 years away.
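
Roughly what that equivalence looks like, as a minimal sketch (assuming PyTorch; the tiny transformer, vocab size, and random tokens below are illustrative placeholders, not anyone's actual model): the training objective is the same next-element cross-entropy whether the sequence holds word tokens or quantized pixel/patch codes.

    import torch
    import torch.nn as nn

    # Toy sizes for illustration only.
    vocab_size, d_model, seq_len, batch = 1024, 256, 128, 4

    embed = nn.Embedding(vocab_size, d_model)
    model = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
        num_layers=2,
    )
    head = nn.Linear(d_model, vocab_size)

    # These could be word tokens or quantized pixel/patch codes; the
    # autoregressive objective does not care which.
    tokens = torch.randint(0, vocab_size, (batch, seq_len))

    # Causal mask so position t only attends to positions <= t.
    causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len - 1)

    hidden = model(embed(tokens[:, :-1]), mask=causal_mask)
    logits = head(hidden)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
    )
    print(loss.item())

The only thing that changes between "next word" and "next pixel" in this framing is the tokenizer in front of the model and the sheer length of the sequences.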
I feel like that's already been demonstrated by the first-generation video generation models we're seeing; early research shows they can become world simulators. There frankly just isn't enough compute yet to train models large enough to do this for all general phenomena and then make them available to general users. It's also unclear whether we have enough training data.

Video is not necessarily less information dense than text, because taken in its entirety it contains text and language generation as special cases. Video generation includes predicting continuations of complex verbal human conversations, of videos of text exchanges, of someone flipping through notes or a book, of someone taking a university exam from their perspective, and so on.