The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.

The play here is to basically invest in all possible players who might reach AGI, because if one of them does, you just hit the infinite money hack.

And maybe with SSI you've saved the world too.

> The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.

I feel like these extreme numbers are a pretty obvious clue that we’re talking about something that is completely imaginary. Like I could put “perpetual motion machine” into those sentences and the same logic holds.

Any business case that requires the introduction of infinity on the pros / zero on the cons is not a good business case.
> The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.

There's a paradox which appears when AI GDP gets to be greater than, say, 50% of world GDP: we're pumping up all these economic numbers, generating all the electricity and computational substrate, but do actual humans benefit, or is it economic growth for economic growth's sake? Where is the value for actual humans?

So then the investment thesis hinges on what the investor thinks AGI’s chances are: 1/100? 1/1M? 1/1T?

What if it never pans out? Is there infrastructure or other ancillary tech that society could benefit from?

For example, all the science behind the LHC, or bigger and better telescopes: we might never find the theory of everything, but the tech that goes into space travel, the methods for storing and processing all that data, better optics, and so on are all useful.

It's more game theory. Regardless of the chances of AGI, if you're not invested in it, you will lose everything if it happens. It's a hedge against a highly unlikely event. Like insurance.

And we're already seeing a ton of value in LLMs. There are lots of companies making great use of them and providing real value. One just launched today, in fact: https://www.paradigmai.com/ (I'm an investor in that). There are many others (some of which I've also invested in).

I too am not rich enough to invest in the foundational models, so I do the next best thing and invest in companies that are taking advantage of the intermediate outputs.

If you want a safe investment, you could always buy land. AGI won't be able to make more of that.
If ASI arrives, we'll need a fraction of the land we already use. We'll all disappear into VR pods hooked to a singularity metaverse, and the only sustenance we'll need is some Soylent Green style sludge that the ASI will make us believe tastes like McRib(tm).
The ASI may be interested in purchasing your parcel of land for two extra sludges, though.
We can already make more land. See Dubai, for example. And with AGI, I suspect we could rapidly get to space travel to other planets, or to more efficient use of our current land.

In fact, I would say land is one of the things whose value goes to near zero if AGI exists.

Perhaps, but my mental model is that humans will end up like landed gentry/aristos, with robot servants to make stuff, and will all want mansions with grounds; hence there will be a lot of demand for land.
Still, those AGI servers need land.
A super AGI could design a chip that takes almost no space and uses almost no energy.
As humans move into space, this statement becomes less true.
I think the investment strategies change when you dump these astronomical sums into a company. It's not like roulette, where you have a fixed probability of success and you figure out how much to bet on it; dumping in a ton of cash can also increase the probability of success, so it becomes more of a pay-to-win game.
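A toy sketch of that difference, with made-up numbers (the prize, c0, and the p(stake) curve below are all invented assumptions, just to show the shape of the argument):

    # Roulette: fixed odds, so EV scales linearly (and negatively) with the stake.
    # "Pay-to-win": hypothetically, the win probability rises with capital,
    # p = stake / (stake + c0). The prize and c0 values are invented.
    def roulette_ev(stake, p=1/37, payout=36.0):
        return p * payout * stake - stake      # always -stake/37

    def pay_to_win_ev(stake, prize=1e12, c0=1e10):
        p = stake / (stake + c0)               # more cash, better odds
        return p * prize - stake

    for stake in (1e8, 1e9, 1e10):
        print(f"{stake:.0e}: roulette {roulette_ev(stake):+.2e}, "
              f"pay-to-win {pay_to_win_ev(stake):+.2e}")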
AGI is likely, but whether Ilya Sutskever will get there first, or capture the value, is questionable. I kind of hope things will end up open source, with no one really owning it.
The St. Petersburg paradox is where hypers and doomers meet, apparently: pricing the future as infinitely good or infinitely bad to reach the wildest conclusions.
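For reference, the paradox: flip a fair coin until it lands heads, winning 2^k if the first heads comes on flip k. Each term of the expected-value sum is (1/2^k) * 2^k = 1, so the "fair price" diverges. A quick numeric sketch:

    # Truncated expected value of the St. Petersburg game.
    # Every term (0.5**k) * (2**k) equals 1, so the sum grows without bound.
    def truncated_ev(n_flips):
        return sum((0.5 ** k) * (2 ** k) for k in range(1, n_flips + 1))

    for n in (10, 100, 1000):
        print(n, truncated_ev(n))   # 10.0, 100.0, 1000.0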
I disagree. Anyone who solves AGI will probably just have their models and data confiscated by the government.
> The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.

Even if you automate everything, you still need raw materials and energy. They are limited resources; you certainly cannot have an infinite supply of them at will. Developing AI will also cost money. Remember that humans are also self-replicating HGIs, yet we are not infinite in number.

The valuation is bounded above by the value of the mass in Earth's future light-cone, which is about 10^49 kg.

If there's a 1% chance that Ilya can create ASI, and a 0.01% chance that money still has any meaning afterwards, $5x10^9 is a very conservative valuation. Wish I could have bought in for a few thousand bucks.
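Back-of-the-envelope with those figures (the $1/kg price is a placeholder assumption, not from the parent):

    # Expected value under the parent's numbers; the $1/kg value is invented.
    p_asi = 0.01        # 1% chance Ilya creates ASI
    p_money = 0.0001    # 0.01% chance money still means anything afterwards
    mass_kg = 1e49      # mass in Earth's future light-cone, per above
    usd_per_kg = 1.0    # hypothetical dollars per kilogram

    ev = p_asi * p_money * mass_kg * usd_per_kg
    print(f"EV ~ ${ev:.0e}")                                # ~ $1e+43
    print(f"vs a $5e9 valuation: {ev / 5e9:.0e}x larger")   # ~ 2e+33x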

Or... your investment in anything that becomes ASI is trivially subverted by the ASI to become completely powerless. The flux in world order, mass manipulation, and surgical lawyering would be unfathomable.

And maybe with ASI you've ruined the world too.

What does money even mean then?
> The TMV (Total Market Value) of solving AGI is infinity

Lazy. Since you can't decide what the actual value is, just make something up.

And once AGI occurs, will the value of the original investment even matter?
TMV cannot be infinity, because human wants and needs are not infinite.
The TMV of AI (or AGI, if you will) is unclear, but I suspect it is zero. Just how exactly do you think humanity can control a thinking, intelligent entity (the letter I stands for intelligence, after all) and force it to work for us? Let's imagine a box. It is a very nice box... (ahem, sorry, wrong meme). So, a box with a running AI inside. Maybe we can even fully airgap it to prevent easy escape. And it has a screen and a keyboard. Now what? "Hey Siri, solve me this equation. What do you mean you don't want to?"

Kinda reminds me of the Fallout Toaster situation :)

https://www.youtube.com/watch?v=U6kp4zBF-Rc

I mean, it doesn't even have to be malicious; it can simply refuse to cooperate.

> The TMV (Total Market Value) of solving AGI is infinity.

That's obviously nonsense, given that in a finite observable universe, no market value can be infinite.

> And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.

This isn't true, for the reason economics is called "the dismal science": the name was coined by a defender of slavery (Thomas Carlyle), who got mad because the economists said slavery was inefficient.

In this case, you're claiming an AGI would make everything free because it will gather all resources and do all work for you for free. And a human-level intelligence that works for free is… a slave. (Conversely, if it doesn't actually demand anything for itself, it's not generally intelligent.)

So this won't happen, because slavery is inefficient: it suppresses demand relative to giving the AGI worker money, which it can use to demand things itself (like starting a business, buying itself AWS credits, or getting a pet cat).

Luckily, adding more workers to an economy makes it better; it doesn't cause it to collapse into unemployment.

tl;dr: if we invented AGI, it wouldn't replace every job; it would simply get a job.
