Has anyone ever presented any solid theoretical reason we should expect language models to yield general intelligence?

So far as I have seen, people have run straight from "wow, these language models are more useful than we expected, and there are probably many more applications waiting for us" to "the AI problem is solved and the apocalypse is around the corner," with no explanation of how, in practical terms, that leap is actually supposed to happen.

It seems far more likely to me that the advances will pause, the gains will be consolidated, time will pass, and future breakthroughs will be required.
