Hacker News
>Maybe they really are all wrong

All? Quite a few of the best minds in the field, like Yann LeCun for example, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) that AGI is very likely NOT just a couple of years away.

You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.

So the statement becomes tautological: "all researchers who believe that AGI is imminent believe that AGI is imminent".

And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.

Doesn't OpenAI explicitly have a "definition" of AGI that's just "it makes some money"?
>You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.

When the old gang at OpenAI was together, Sutskever, not Sam, was easily the most hypey of them all. And if you ask Norvig today, AGI is already here. Two months ago, LeCun said he believes AGI could be here in 5 to 10 years, and this is supposed to be the skeptic. This is the kind of thing I'm talking about. The idea that it's just the non-academics caught up in the hype is just blatantly false.

No, it doesn't have to be literally everybody to make the point.

Here's why I know that OpenAI is stuck in a hype cycle. For all of 2024, the cry from employees was "PhD level models are coming this year; just imagine what you can do when everyone has PhD level intelligence at their beck and call". And, indeed, PhD level models did arrive...if you consider GPQA to be a benchmark that is particularly meaningful in the real world. Why should I take this year's pronouncements seriously, given this?

OpenAI is what you get when you take Goodhart's Law to the extreme. They are so focused on benchmarks that they are completely blind to the rate of progress that actually matters (hint...it's not model capability in a vacuum).

Yann does indeed believe that AGI will arrive within a decade, but the important thing is that he is honest that this is an uncertain estimate based on extrapolation.

I'm inclined to agree with Yann about true AGI, but he works at Meta, and they seem to think current LLMs are sufficiently useful to be dumping preposterous amounts of money into them as well.

It may be a distinction that's not worth making if the current approach is good enough to completely transform society and make infinite money.

Yeah, in my mind, the distinction worth making is where the inflection point from exponential growth to plateau in the s-curve of usefulness is. Have we already hit it? Are we going to hit it soon? Is it far in the future? Or is it exponential from here straight to "the singularity"?

Hard to predict!

If we've already hit it, this has already been a very short period of time during which we've seen incredibly valuable new technology commercialized, and that's nothing to sneeze at, and fortunes have and will be rightly made from it.

If it's in the near future, then a lot of people might be over-investing in the promise of future growth that won't materialize to the extent they hoped. Some people will lose their shirts, but we're still left with incredibly useful new technology.

But if we have a long (or infinite) way to go before hitting that inflection point, then the hype is justified.
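Part of why this is hard to predict is that, from inside the curve, the early portion of a logistic S-curve is nearly indistinguishable from a pure exponential; the two only diverge sharply after the inflection point. A minimal numerical sketch of that idea (the function names and parameters here are illustrative, not from the thread):

```python
import math

def logistic(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """S-curve: looks exponential early on, plateaus at `ceiling` later.

    The inflection point (growth stops accelerating) is at t = midpoint.
    """
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def pure_exponential(t, rate=1.0):
    """An exponential matching the logistic's early behavior."""
    return math.exp(rate * t)

# Well before the midpoint, the two curves are nearly identical...
print(logistic(-4), pure_exponential(-4))  # both ~0.018
# ...but past the inflection point they diverge sharply.
print(logistic(4), pure_exponential(4))    # ~0.982 vs ~54.6
```

The point of the sketch: observations taken before the inflection point can't tell you which curve you're on, which is exactly why "have we already hit it?" is so contested.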

It's obviously not taken to mean literally everybody.

Whatever LeCun says (and even he has said "AGI is possible in 5 to 10 years" as recently as two months ago, so if that's the 'skeptic' opinion, you can only imagine what a lot of people are thinking), Meta has poured and is still pouring a whole lot of money into LLM development. "Put your money where your mouth is," as they say. People can say all sorts of things, but what they choose to spend their money on tells you a whole lot.

Who says they will stick to autoregressive LLMs?