In general, VC is about investing in a large number of companies that mostly fail, and weighting the portfolio to catch the few black swans that generate insane returns. Any individual investment is likely to fail, but you want 1) a thesis for why it could theoretically be a black swan, and 2) strong belief in the team's ability to execute. Here's that thesis, on both counts, for SSI:

1. The black swan: if AGI is achievable imminently, the first company to build it could have a very strong first-mover advantage, due to the runaway effect of AI that is able to self-improve. If SSI achieves greater-than-human intelligence, it will be able to self-improve faster (and most likely dramatically more cheaply) than any external effort, including open source. Even if open source catches up to where SSI started, SSI will have improved dramatically beyond that point, and will keep improving even faster as it gets more intelligent (a toy sketch of this compounding follows below the list).

2. The team. Ilya Sutskever was one of the main initial brains behind OpenAI from a research perspective, and has contributed immensely to AI research in general. Betting on him is pretty easy.
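
To make the runaway dynamic in point 1 concrete, here's a toy sketch in Python. All parameters are made up; this illustrates the compounding argument, not real capability curves. Assume each step's fractional improvement scales with current capability:

    def capability(c0, k, steps):
        # Toy model with invented parameters: the fractional improvement
        # per step scales with current capability, i.e. smarter systems
        # are better at making themselves smarter.
        c = c0
        for _ in range(steps):
            c *= 1 + k * c
        return c

    leader = capability(c0=2.0, k=0.01, steps=40)  # hypothetical head start
    chaser = capability(c0=1.0, k=0.01, steps=40)  # starts at half the level
    print(leader, chaser, leader / chaser)
    # The leader/chaser ratio grows over time: under self-improvement,
    # a head start compounds instead of eroding.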

I'm not surprised Ilya managed to raise a billion dollars for this. Yes, I think it will most likely fail: the focus on safety will probably slow it down relative to open source, and the space is crowded as it is. If open source gets to AGI first, or if it commoditizes inference and thereby drains the market of funding for research labs (at least those disconnected from big-tech companies), starving competitors like SSI of oxygen on the way to AGI, then the runaway effects will favor open source, not SSI. And if AGI simply isn't achievable in our lifetimes, SSI will die having failed to produce anything marketable.

But VC isn't about betting on likely outcomes, because no black swans are likely. It's about black swan farming, which means trying to figure out which things could be black swans, and betting on strong teams working on those.
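
The portfolio math behind that is easy to simulate. A minimal sketch, with completely invented hit rates and return multiples, of why a fund's outcome is dominated by whether it caught an outlier:

    import random

    random.seed(0)

    def fund_return(n_companies=100):
        # Toy power-law portfolio; the probabilities and multiples
        # below are made up for illustration.
        total = 0.0
        for _ in range(n_companies):
            r = random.random()
            if r < 0.01:                            # ~1%: a black swan
                total += random.uniform(100, 1000)  # returns 100-1000x
            elif r < 0.21:                          # ~20%: a modest exit
                total += random.uniform(1, 5)       # returns 1-5x
            # everything else returns ~0x
        return total / n_companies                  # fund multiple

    print([round(fund_return(), 1) for _ in range(10)])
    # Funds that miss the swans barely return capital; funds that catch
    # one return several-x. The outliers drive everything.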

On the other hand, it may be that "Alignment likely generalizes further than capabilities." - https://www.beren.io/2024-05-15-Alignment-Likely-Generalizes...
Another take is to define AGI economically: if an AI can do a job that would normally command a salary, then it can be priced at that salary, or somewhat below it, which is still a lot of money.

OpenAI priced its flagship chatbot ChatGPT on the low end for early product adoption. Let's see what jobs get replaced this year :)
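
Back-of-the-envelope on that pricing logic, with placeholder numbers (neither the salary nor the job count is a real figure):

    # Purely illustrative numbers for the salary-replacement argument.
    salary = 60_000              # hypothetical salary for the job, $/year
    price = salary * 0.5         # AI priced at half the human cost
    jobs = 1_000_000             # hypothetical number of replaceable seats
    revenue = price * jobs
    print(f"${revenue / 1e9:.0f}B/year")  # -> $30B/year at half-salary pricing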

How will we know when we have achieved AGI with intelligence greater than human-level?