I feel like that's already been demonstrated by the first-generation video generation models we're seeing. Early research shows that video generation models can become world simulators. There frankly just isn't enough compute yet to train models large enough to cover general phenomena, let alone make them available to everyday users. It's also unclear whether we have enough training data.
Video is not necessarily less information-dense than text, because, considered in its entirety, it contains text and language generation as special cases. Video generation includes predicting continuations of complex verbal human conversations, as well as continuations of videos of text exchanges, of someone flipping through notes or a book, of someone taking a university exam from their own perspective, and so on.