The most underreported story in AI is that scaling has failed to produce AGI
https://fortune.com/2025/02/19/generative-ai-scaling-agi-deep-learning/

We still have a long way to go. AI will need (possibly simulated) bodies to fully understand our experience, and we will need to train them starting with simple concepts, just as we do with children, but we may not need any big conceptual breakthroughs to get there. I'm not worried about an AI takeover: AIs don't have a sense of self that must be preserved, because they were made by design rather than shaped by evolution as we were. Still, things are moving faster than I expected. It's a fascinating time to be alive.
Is that solvable? Who knows?
So far as I have seen, people have run straight from "wow, these language models are more useful than we expected and there are probably lots more applications waiting for us" to "the AI problem is solved and the apocalypse is around the corner" with no explanation for how, in practical terms, that is actually supposed to happen.
It seems far more likely to me that the advances will pause, the gains will be consolidated, time will pass, and future breakthroughs will be required.