"There are maybe a few hundred people in the world who viscerally understand what's coming. Most are at DeepMind / OpenAI / Anthropic / X but some are on the outside. You have to be able to forecast the aggregate effect of rapid algorithmic improvement, aggressive investment in building RL environments for iterative self-improvement, and many tens of billions already committed to building data centers. Either we're all wrong, or everything is about to change." - Vedant Misra, Deepmind Researcher.

Maybe your calibration isn't poor. Maybe they really are all wrong. But there's a tendency here to assume these people behind the scenes are all charlatans, fueling hype without equal substance, hoping to make a quick buck before it all comes crashing down, and I don't think that's true at all. I think these people genuinely believe they're going to get there. And if you genuinely believe that, then this kind of investment isn't so crazy.

The problem is, they are hugely incentivised to hype to raise funding. It’s not whether they are “wrong”, it’s whether they are being realistic.

The argument presented in the quote is: "everyone at AI foundation companies is putting money into AI, therefore we must be near AGI."

The best evaluation of progress is to use the tools we have. It doesn’t look like we are close to AGI. It looks like amazing NLP with an enormous amount of human labelling.

Absolutely. Look at how Sam Altman speaks.

If you've taken a couple of lectures about AI, you've probably been taught not to anthropomorphize your own algorithms, especially given how the masses think of AI (in terms of Skynet, Cortana, "Her", Ex Machina, etc). Anthropomorphizing encourages people to misjudge the models' capabilities and ascribe to them all the traits of AI they've seen in TV and movies.

Sam has ignored that advice, and exploited the hype that can be generated by doing so. He even tried to mimic the product in "Her", down to the voice [0]. The old board said his "outright lying" made it impossible to trust him [1]. That behavior raises eyebrows, even if he's got a legitimate product.

[0]: https://www.wired.com/story/openai-gpt-4o-chatgpt-artificial...

[1]: https://www.theverge.com/2024/5/28/24166713/openai-helen-ton...

> The problem is, they are hugely incentivised to hype to raise funding.

Hype is extremely normal. Everyone with a business gets the chance to hype for the purpose of funding. That alone isn't going to get several of the biggest tech giants in the world to pour in billions.

Satya just said he has his $80 billion ready. Is Microsoft an "AI foundation company"? Is Google? Is Meta?

The point is the old saying - "Put your money where your mouth is". People can say all sorts of things but what they choose to spend their money on says a whole lot.

And I'm not saying this means the investment is guaranteed to be worth it.

The newest US president announced this within 48 hours of assuming office. Hype alone couldn't set such a big wheel in motion.
> there's a tendency here to assume these people behind the scenes are all charlatans, fueling hype without equal substance, hoping to make a quick buck before it all comes crashing down, and I don't think that's true at all. I think these people genuinely believe they're going to get there.

I don't immediately disagree with you but you just accidentally also described all crypto/NFT enthusiasts of a few years ago.

NFTs couldn't pass the Turing test. LLMs arguably can, which is something I didn't expect to witness in my lifetime.

The two are qualitatively different.

It's identical energy. A significant number of people are attaching their hopes and dreams to a piece of technology while deluding themselves about its technical limitations. It's all rooted in greed. Relatively few are in it to push humanity forward; most are just trying to "get theirs."
Well, crypto had nowhere near the uptake [0] or investment (even leaving this announcement aside, several of the biggest tech giants are pouring billions into this).

At any rate, I'm not saying this means that all this investment is guaranteed to pay off.

[0] With 300 million weekly active users, 1 billion messages per day, and the #8 ranking in worldwide site visits over the last few months, just two years after release, ChatGPT is the fastest-adopted software product ever.

Motivated reasoning sings nicely to the tune of billions of dollars. None of these folks will ever say, "don't waste money on this dead end." However, it's clear that there is still a lot of productive value to extract from transformers, and certainly other useful things will appear along the way. It's not the worst investment I can imagine, even if it never leads to "AGI".
Yeah, people don't rush to say "don't waste money on this dead end", but think about it for a moment.

A $500B investment doesn't just fall into one's lap. It's not your run-of-the-mill funding round. No, this is something you very actively work towards, and your funders must be really damn convinced it's worth the gamble. No one sane is going to look at what they genuinely believe to be a dead end and try to drum up Manhattan Project-scale investment. Careers have been nuked for far less.

I am not qualified to make any assumptions but I do wonder if a massive investment into computing infrastructure serves national security purposes beyond AI. Like building subway stations that also happen to serve as bomb shelters.

Are there computing and cryptography problems that the infrastructure could be (publicly or quietly) reallocated to address if the United States found itself in a conflict? Any cryptographers here have a thought on whether hundreds of thousands of GPUs turned on a single cryptographic key would yield any value?

I'm not a cryptographer, nor am I good with math (actually I suck badly; consider yourself warned...), but I am curious about how threatened password hashes should feel if the 'AI juggernauts' suddenly fancy themselves playing on the red team, so I quickly did some (likely poor) back-of-the-napkin calculations.

'Well known' password notwithstanding, let's use the following as a password:

correct-horse-battery-staple

This password is 28 characters long, and whilst it could be stronger with uppercase letters, numbers, and special characters, it still shirtfronts a respectable ~1,397,958,111 decillion (1.39 × 10^42) combinations for an unsuspecting AI-turned-hashcat cluster to crack (that figure implies an alphabet of roughly 32 symbols, since 32^28 = 2^140). Let's say this password was protected by SHA2-256, assuming no cryptographic weaknesses exist (I haven't checked; this is purely academic), and that at least 50% of the hashes would need to be tested before 'success' flourishes (let's try to make things a bit exciting...).
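
If you want to check that figure yourself, here's a minimal Python sketch (assuming the 32-symbol alphabet, which is what the number implies, since 32^28 = 2^140):

    # Keyspace for a 28-char password over an assumed 32-symbol alphabet.
    ALPHABET_SIZE = 32
    LENGTH = 28

    keyspace = ALPHABET_SIZE ** LENGTH
    print(f"{keyspace:.3e}")  # 1.394e+42, matching the figure above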

I looked up a random hashcat benchmark and found an average of 20 gigahashes per second (GH/s) for SHA2-256 on a single RTX 4090.

If we throw 100 RTX 4090s at this hashed password, assuming a uniform 20 GH/s per card (combined firepower of 2,000 GH/s, i.e. 2 × 10^12 hashes per second) and absolutely perfect running conditions, it would take roughly eleven sextillion (1.1 × 10^22) years to reach that 50% mark. Earth will be long gone by the time that rolls around.

Turning up the heat (perhaps literally) by throwing 1,000,000 RTX 4090s at this hashed password, assuming the same conditions, doesn't help much (in terms of Earth's lifespan): roughly 1.1 quintillion (1.1 × 10^18) years.

Using some recommended password specifications from NIST - 15 characters comprising upper and lower-case letters, numbers, and special characters - let's try:

dXIl5p*Vn6Gt#BH

Despite the higher per-character complexity, this password only ekes out a paltry ~41 sextillion (4.11 × 10^22) possible combinations. Throwing 100 RTX 4090s at it would, rather worryingly, take only around 326 years to reach a 50% chance of success. My calculator didn't even need scientific notation for that one!

More alarming still is what happens when 1,000,000 RTX 4090s get sic'ed on the shorter hashed password: only about 12 days to work through half of its keyspace.
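
For anyone who wants to redo the napkin math, here's a rough Python sketch (assuming the 20 GH/s-per-card benchmark above, a 50%-of-keyspace success threshold, and the keyspace figures as quoted):

    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    def crack_years(keyspace, gpus, rate_per_gpu=20e9):
        """Years to test half the keyspace at gpus * rate_per_gpu hashes/sec."""
        return (keyspace / 2) / (gpus * rate_per_gpu) / SECONDS_PER_YEAR

    long_pw = 32 ** 28   # ~1.39e42, the 28-char password above
    short_pw = 4.11e22   # the 15-char figure quoted above

    for label, space in (("28-char", long_pw), ("15-char", short_pw)):
        for gpus in (100, 1_000_000):
            print(f"{label}, {gpus:>9,} GPUs: {crack_years(space, gpus):.3g} years")
    # 28-char: ~1.1e22 years (100 GPUs), ~1.1e18 years (1M GPUs)
    # 15-char: ~326 years (100 GPUs), ~0.033 years, i.e. ~12 days (1M GPUs)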

I read a report that suggested Microsoft aimed to have 1.8 million GPUs by the end of 2024. We'll probably be safe for at least the next six months or so. All bets are off after that.

All I dream about is the tidal wave of cheap high-performance GPUs flooding the market when the AI bubble bursts, so I can finally run Far Cry at 25 frames per second for less than a grand.

> Maybe they really are all wrong

All? Quite a few of the best minds in the field, like Yann LeCun, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) AGI is very likely NOT just a couple of years away.

You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.

So the statement becomes tautological “all researchers who believe that AGI is imminent believe that AGI is imminent”.

And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.

I'm inclined to agree with Yann about true AGI, but he works at Meta, and they seem to think current LLMs are sufficiently useful to be dumping preposterous amounts of money into them as well.

It may be a distinction that's not worth making if the current approach is good enough to completely transform society and make infinite money.

It's obviously not taken to mean literally everybody.

Whatever LeCun says (and even he said "AGI is possible in 5 to 10 years" as recently as two months ago, so if that's the 'skeptic' opinion, you can only imagine what a lot of people are thinking), Meta has poured and continues to pour a whole lot of money into LLM development. "Put your money where your mouth is," as they say. People can say all sorts of things, but what they choose to spend their money on tells you a whole lot.

Who says they will stick to autoregressive LLMs?
I think it will land somewhere in between, like most things do. I don't think they are charlatans at all, but I do think they're probably a bit high on their own supply. It's true that "everything is about to change", but that change will look more like the status quo than the current hype cycle suggests. There have been plenty of periods in history when "everything changed", and I believe we're already several years into one now; yet in all those cases, despite "everything" changing, a perhaps surprising number of things remained the same. I think this will be no different. But it's hard, impossible really, to accurately predict where the chips will land.
My prediction is that Apple loses to OpenAI, who release a "Her"-like phone (as in the movie). She appears on your lock screen, a la the FaceTime call UI/UX, and she can be skinned to look like anyone, e.g. a deceased loved one.

She interfaces with the AI agents of companies, organizations, friends, family, etc. to get things done for you automagically (or to learn from: what's my friend's birthday? His agent tells yours), and she is like a friend, always there for you at your beck and call, like in the movie "Her".

Zuckerberg's glasses, which cannot take selfies, will only be complementary to our AI phones.

That's just my guess and desire as a fervent GPT user, as well as a Meta Ray-Ban wearer (can't take selfies with glasses).

My take on this: despite an ever-increasingly connected world, an assistant like this still needs to be available at all times your device is. If I can't rely on it when my signal is weak or the network/service is down or saturated, its ability to work itself into people's core routines is limited. So either the model runs locally, in which case I'd argue OpenAI have no moat, or they uncover some secret sauce they can keep contained to their research labs and data centres that's simply that much better than the rest, in perpetuity, and is so good people are willing to undergo the massive switching costs and tolerate the situations in which the service they've come to depend on isn't available. Let's also not discount that Apple is one of the largest smartphone manufacturers globally, and that getting up to speed in the myriad industries required to compete with them, even when contracting out much of that work, is hard.
I still fail to see who desires that, how it benefits humanity, or why we need to invest $500B to get there.
Sorry, you live in a different world. Google Glass was aggressively lame, and the Ray-Bans are only slightly less so.

But pulling out your phone to talk to it like a friend...

Very insightful take on agents interacting with agents, thanks for sharing.

Re: the "Her" phone - I see people already trying to build this type of product; one example: https://www.aphoneafriend.com

I am hoping it is just the usual Ponzi thing.
How would this be a Ponzi scheme? Who are the leaf nodes ending up holding the bag?
So they're either wrong or building Skynet.