
The most underreported story in AI is that scaling has failed to produce AGI

https://fortune.com/2025/02/19/generative-ai-scaling-agi-deep-learning/
I've recently come to the opposite conclusion. I’ve started to feel in the last couple of weeks that we’ve hit an inflection point with these LLM-based models that can reason. Things seem different. It’s like we can feel the takeoff. My mind has changed. Up until last week, I believed that superhuman AI would require explicit symbolic knowledge, but as I work with these “thinking” models like Gemini 2.0 Flash Thinking, I see that they can break problems down and work step-by-step.

We still have a long way to go. AI will need (possibly simulated) bodies to fully understand our experience, and we need to train them starting with simple concepts just like we do with children, but we may not need any big conceptual breakthroughs to get there. I’m not worried about the AI takeover—they don’t have a sense of self that must be preserved because they were made by design instead of by evolution as we were—but things are moving faster than I expected. It’s a fascinating time to be living.

I agree. The problems now seem to be agency and very long context, both of which are required for most real-world tasks.

Is that solvable? Who knows.

Has anyone ever presented any solid theoretical reason we should expect language models to yield general intelligence?

So far as I can see, people have run straight from "wow, these language models are more useful than we expected, and there are probably lots more applications waiting for us" to "the AI problem is solved and the apocalypse is around the corner," with no explanation of how, in practical terms, that is actually supposed to happen.

It seems far more likely to me that the advances will pause, the gains will be consolidated, time will pass, and future breakthroughs will be required.
