This being ycombinator, and as such ostensibly having one or two (if not more) VCs as readers/commentators … can someone please tell me how these AI companies being invested in are going to make returns on the money invested? What’s the business plan? (I’m not rich enough to be in these meetings.) I just don’t see how the returns will happen.

Open source LLMs exist and will get better. Is it just that all these companies will vie for a winner-take-all situation where the “best” model will garner the subscription? Doesn’t OpenAI account for a substantial share of the revenue of the whole AI space? I just don’t see it. But I don’t have VC levels of cash to bet on a 10x or 100x return, so what do I know?

VCs at the big/mega funds make most of their money from fees; they don't actually care as much about the potential portfolio investment exits 10-15 years from now. What they care MOST about is the ability to raise another fund in 2-3 years, so they can milk more fees from LPs. E.g. a 2% fee PER YEAR on a $5bn fund is a lot of guaranteed risk-free money (rough math below).

Being able to achieve that depends entirely on two things:

1) deploying capital in the current fund on 'sexy' ideas so they can tell LPs they are doing their job

2) paper markups, which they will get, since Ilya will most definitely be able to raise another round or two at a higher valuation, even if it eventually goes bust or gets sold at cost.

With 1) and 2), they can go back to their existing fund LPs and raise more money for their next fund and milk more fees. Getting exits and carry is just the cherry on top for these megafund VCs.
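
For concreteness, here's the fee math from above as a quick sketch (a rough back-of-the-envelope, assuming a flat 2% on committed capital for the whole fund life; real funds often step fees down after the investment period):

    fund_size = 5_000_000_000        # $5B fund
    annual_fee = 0.02 * fund_size    # 2% management fee per year
    fund_life_years = 10             # typical fund term

    total_fees = annual_fee * fund_life_years
    print(f"fees per year: ${annual_fee / 1e6:.0f}M")        # $100M
    print(f"fees over fund life: ${total_fees / 1e9:.1f}B")  # $1.0B, exit or no exit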

So the question I have is: who are these LPs, and why are they demanding funds go into "sexy" ideas?

I mean, it probably depends on the LP and what their vision is. Not all apples are red; they come in many varieties, some for cider, others for pies. Am I wrong?

The person you're responding to has a very sharp view of the profession. imo it's more nuanced, but not very complicated. In capitalism, capital flows; that's how it works, capital should be deployed. Large pools of capital are typically put to work (this in itself is nuanced). The "put to work" is various types of deployment of the capital. The simplest way to look at this is risk. Let's take pension funds, because we know they invest in VC firms as LPs. Here* you can find an example of the breakdown of the investments made by one very large pension fund. You'll note most of it is very boring, and the positions held related to venture are tiny; they would need a crazy outsized swing from a VC firm to move any needles. Given all that, it traditionally* has made no sense to bet "down there" (early stage), mostly because the expertise isn't there, and they don't have the time to learn tech/product. Fees are the cost of capital deployment at the early stages, and from what I've been told talking to folks who work at pension funds, they're happy to see VCs take the swings.

but... it really depends heavily on the LP base of the firm, and what the firm raised its fund on; it's incredibly difficult to generalize. The funds I'm involved around as an LP... in my opinion they can get as "sexy" as they like, because I buy their thesis; then it's just: get the capital deployed!!!!

Most of this is all a standard deviation game, not much more than that.

https://www.otpp.com/en-ca/investments/our-advantage/our-per... https://www.hellokoru.com/

I can't understand one thing: why are pension funds so fond of risky capital investments? What's the problem with allocating that money into shares of a bunch of old, stable companies and getting a small but steady income? I can understand if a few people with lots of disposable money are looking for some suspense and thrills, using venture capital like others use a casino. But what's the point for pension funds, which face significant problems if they lose the managed money in a risky venture?
A better way to look at it is: if pension funds are not fond of risky investments, then what am I seeing?

I'm not a VC so maybe you don't care what I think, I'm not sure.

Last night, as my 8yo was listening to children's audiobooks going to sleep, she asked me to have it alternate book A then B then A then B.

I thought, idunno maybe I can work out a way to do this. Maybe the app has playlists and maaaaaaaaaaybe has a way to set a playlist on repeat. Or maybe you just can't do this in the app at all. I just sat there and switched it until she fell asleep, it wasn't gonna be more than 2 or 3 anyway, and so it's kind of a dumb example.

But here's the point: computers can process language now. I can totally imagine her telling my phone to do that and it being able to do so, even if she's the first person ever to want it to do that. I think the bet is that a very large percentage of the world's software is going to want to gain natural language superpowers. And that this is not a trivial undertaking that will be achieved by a few open source LLMs. It will be a lot of work for a lot of people to make this happen, and as such a lot of money will be made along the way.

Specifically how will this unfold? Nobody knows, but I think they wanna be deep in the game when it does.

How is this any different than the (lack of) business model of all the voice assistants?

How good does it have to be, how many features does it have to have, how accurate does it need to be... in order for people to pay anything? And how much are people actually willing to spend against the $XX Billion of investment?

Again it just seems like "sell to AAPL/GOOG/MSFT and let them figure it out".

> How is this any different than the (lack of) business model of all the voice assistants?

Feels very different to me. The dominant ones are run by Google, Apple, and Amazon, and the voice assistants are mostly add-on features that don't by themselves generate much (if any) revenue (well, aside from the news that Amazon wants to start charging for a more advanced Alexa). The business model there is more like "we need this to drive people to our other products where they will spend money; if we don't others will do it for their products and we'll fall behind".

Sure, these companies are also working on AI, but there are also a bunch of others (OpenAI, Anthropic, SSI, xAI, etc.) that are banking on AI as their actual flagship product that people and businesses will pay them to use.

Meanwhile we have "indie" voice assistants like Mycroft that fail to find a sustainable business model and/or fail to gain traction and end up shutting down, at least as a business.

I'm not sure where this is going, though. Sure, some of these AI companies will get snapped up by bigger corps. I really hope, though, that there's room for sustainable, independent businesses. I don't want Google or Apple or Amazon or Microsoft to "own" AI.

Hard to see normies signing up for monthly subs to VC-funded AI startups when a surprisingly large % are still resistant to paying AAPL/GOOG for email/storage/etc. Getting a $10/mo uplift for AI functionality on top of your iCloud/GSuite/Office365/Prime is a hard enough sell as it stands.

And again, against CapEx of something like $200B, $100/year per user practically rounds to 0.

Not to mention the ongoing OpEx to actually run the inference/services on top.

You'd be very surprised at how much they're raking in from the small sliver of people who do pay. It only seems small because of how much more they make from other things. If you have a billion users, a tiny percentage of paying users is still a gazillion dollars. Getting to a billion users is the hard part. They're betting they'll figure out how to monetize all those eyeballs when they get there.
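
A sketch of both sides of this exchange, with made-up but plausible inputs (a hypothetical 2% conversion rate at $10/month, set against the ~$200B CapEx figure cited above):

    users = 1_000_000_000     # total users
    conversion = 0.02         # assume 2% pay for the premium tier
    arpu = 10 * 12            # $10/month -> $120/year per paying user
    capex = 200_000_000_000   # the ~$200B CapEx figure cited above

    revenue = users * conversion * arpu
    print(f"annual subscription revenue: ${revenue / 1e9:.1f}B")            # $2.4B
    print(f"years to recoup CapEx (ignoring OpEx): {capex / revenue:.0f}")  # ~83

So a tiny paying sliver of a billion users really is billions a year, and yet still small against that CapEx.
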
The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.

The play here is to basically invest in all possible players who might reach AGI, because if one of them does, you just hit the infinite money hack.

And maybe with SSI you've saved the world too.

So then the investment thesis hinges on what the investor thinks AGI's chances are. 1/100? 1/1M? 1/1T?

What if it never pans out? Is there infrastructure or other ancillary tech that society could benefit from?

For example, all the science behind the LHC, or bigger and better telescopes: we might never find the theory of everything, but the tech that goes into space travel, the science of storing and processing all that data, better optics, etc. are all useful tech.

It's more game theory. Regardless of the chances of AGI, if you're not invested in it, you will lose everything if it happens. It's more like a hedge on a highly unlikely event. Like insurance.

And we're already seeing a ton of value in LLMs. There are lots of companies that are making great use of LLMs and providing a ton of value. One just launched today in fact: https://www.paradigmai.com/ (I'm an investor in that). There are many others (some of which I've also invested in).

I too am not rich enough to invest in the foundational models, so I do the next best thing and invest in companies that are taking advantage of the intermediate outputs.
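
One way to make that "insurance" framing concrete is a toy expected-value calculation. All numbers here are illustrative, not anyone's actual estimates; the 20% stake assumes $1B at the $5B post-money valuation reported elsewhere in this thread:

    p_agi = 0.01      # subjective odds this team gets to AGI
    stake = 0.20      # $1B at a $5B post-money valuation (assumed)
    outcome = 500e9   # hypothetical "worth more than $500B" scenario
    investment = 1e9  # the round size

    expected_value = p_agi * stake * outcome
    print(f"EV: ${expected_value / 1e9:.1f}B vs ${investment / 1e9:.0f}B invested")
    # Even at 1% odds the EV ($1.0B) matches the check; at 5% it dominates.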

If you want a safe investment you could always buy land. AGI won't be able to make more of that.

We can already make more land. See Dubai for example. And with AGI, I suspect we could rapidly get to space travel to other planets or more efficient use of our current land.

In fact, I would say that land is one of the things whose value goes to near zero if AGI exists.

Perhaps, but my mental model is that humans will end up like landed gentry / aristos, with robot servants to make stuff, and will all want mansions with grounds; hence there will be a lot of land demand.

Same funding as OpenAI when they started, but SSI explicitly declared their intention not to release a single product until superintelligence is reached. Closest thing we have to a Manhattan Project in the modern era?

No. It's the next Magic Leap of our era. Or the next Juicero of our era. Or the next any of the hundreds of unprofitable startups losing billions of dollars a year without any business plan beyond VC subsidies and a hope for an exit of our era.

Getting funded by a16z is if anything a sign that the field is not hot anymore.
All money is green, regardless of level of sophistication. If you’re using investment firm pedigree as signal, gonna have a bad time. They’re all just throwin’ darts under the guise of skill (actor/observer|outcome bias; when you win, it is skill; when you lose, it was luck, broadly speaking).

Edit: @jgalt212: Indeed, one should be sophisticated themselves when negotiating investment to not be unduly encumbered by shades of the unsophisticated or potentially folks not optimizing for aligned interests. But let us not get too far off topic and risk subthread detachment. Feel free to cut a new thread for further discussion on the subject.

> All money is green, regardless of level of sophistication.

True, but most, if not all, money comes with strings attached.

Why is that?
Might be the almost-securities-fraud they were doing with crypto when it was fizzling out in 2022.

Regardless, the point is moot: money is money, and a16z's money isn't their own money but other people's money.

Lots of comments either defending this ("it's taking a chance on being the first to build AGI with a proven team") or saying "it's a crazy valuation for a 3 month old startup". But both of these "sides" feel like they miss the mark to me.

On one hand, I think it's great that investors are willing to throw big chunks of money at hard (or at least expensive) problems. I'm pretty sure all the investors putting money in will do just fine even if their investment goes to zero, so this feels exactly what VC funding should be doing, rather than some other common "how can we get people more digitally addicted to sell ads?" play.

On the other hand, I'm kind of baffled that we're still talking about "AGI" in the context of LLMs. While I find LLMs to be amazing, and an incredibly useful tool (if used with a good understanding of their flaws), the more I use them, the more that it becomes clear to me that they're not going to get us anywhere close to "general intelligence". That is, the more I have to work around hallucinations, the more that it becomes clear that LLMs really are just "fancy autocomplete", even if it's really really fancy autocomplete. I see lots of errors that make sense if you understand an LLM is just a statistical model of word/token frequency, but you would expect to never see these kinds of errors in a system that had a true understanding of underlying concepts. And while I'm not in the field so I may have no right to comment, there are leaders in the field, like LeCun, who have expressed basically the same idea.

So my question is, have Sutskever et al. provided any acknowledgement of how they intend to "cross the chasm" from where we are now with LLMs to a model of understanding, or has it been mainly "look what we did before, you should take a chance on us to make discontinuous breakthroughs in the future"?

Ilya has discussed this question: https://www.youtube.com/watch?v=YEUclZdj_Sc
Thank you very much for posting! This is exactly what I was looking for.

On one hand, I understand what he's saying, and that's why I have been frustrated in the past when I've heard people say "it's just fancy autocomplete" without emphasizing the awesome capabilities that can give you. While I haven't seen this video by Sutskever before, I have seen a very similar argument by Hinton: in order to get really good at next token prediction, the model needs to "discover" the underlying rules that make that prediction possible (a minimal sketch of that mechanism is at the end of this comment).

All that said, I find his argument wholly unconvincing (and again, I may be waaaaay stupider than Sutskever, but there are other people much smarter than I who agree). And the reason for this is because every now and then I'll see a particular type of hallucination where it's pretty obvious that the LLM is confusing similar token strings even when their underlying meaning is very different. That is, the underlying "pattern matching" of LLMs becomes apparent in these situations.

As I said originally, I'm really glad VCs are pouring money into this, but I'd easily make a bet that in 5 years that LLMs will be nowhere near human-level intelligence on some tasks, especially where novel discovery is required.
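
(For anyone weighing the "fancy autocomplete" framing: mechanically, an LLM maps a token prefix to a probability distribution over the next token and samples from it in a loop. A minimal sketch with the transformers library, using the small gpt2 model purely as a stand-in:)

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    for _ in range(5):
        logits = model(ids).logits[:, -1, :]   # scores for the next token only
        probs = torch.softmax(logits, dim=-1)  # distribution over the vocabulary
        next_id = torch.multinomial(probs, 1)  # sample one token from it
        ids = torch.cat([ids, next_id], dim=-1)
    print(tok.decode(ids[0]))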

I actually echo your exact sentiments. I don't have the street cred but watching him talk for the first few minutes I immediately felt like there is just no way we are going to get AGI with what we know today.

Without some raw reasoning capacity (maybe neuro-symbolic is the answer, maybe not), LLMs won't be enough. Reasoning is super tough because it's not as easy as predicting the next most likely token.

>"We’ve identified a new mountain to climb that’s a bit different from what I was working on previously. We’re not trying to go down the same path faster. If you do something different, then it becomes possible for you to do something special."

Doesn't really imply let's just do more LLMs.

"It will focus on building a small highly trusted team of researchers and engineers split between Palo Alto, California and Tel Aviv, Israel."

Why Tel Aviv, in Israel?

Because it's a startup hub, there is great engineering talent there, and the cost of living is lower than in the US.

“…a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market,” Gross said in an interview.

A couple years??

well since it's no longer ok to just suck up anyone's data and train your AI, it will be a new challenge for them to avoid that pitfall. I can imagine it will take some time...

> Sutskever said his new venture made sense because he "identified a mountain that's a bit different from what I was working on."

I guess the "mountain" is the key. "Safe" alone is far from being a product. As for current LLMs, I'd even question how valuable "safe" can be.

To be honest, from the way "safe" and "alignment" are perceived on r/LocalLLaMA, in two years it's not going to be very appealing.

We'll be able to get most of ChatGPT-4o's capabilities locally on affordable hardware, including "unsafe" and "unaligned" data, as quantization noise is drastically reduced, meaning smaller quantized models that can run on good-enough hardware.

We'll see a huge reduction in price and inference times within two years, and whatever SSI is trained on won't be economically viable enough to recoup that $1B investment, guaranteed.
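
(A sketch of the local-inference workflow this bet rests on, using llama-cpp-python; the GGUF filename is a placeholder for whatever quantized weights you have on disk:)

    from llama_cpp import Llama

    # Load 4-bit-quantized weights; the path is hypothetical.
    llm = Llama(
        model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",
        n_ctx=4096,  # context window
    )

    out = llm("Explain quantization in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])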

It all depends on GPT-5's performance. Right now Sonnet 3.5 is the best, but there's nothing really groundbreaking. SSI's success will depend on how much uplift it can provide over GPT-5, which already isn't expected to be a significant leap beyond GPT-4.

$1B raise, $5B valuation. For a company that is a couple months old and doesn't have a product or even a single line of code in production. Wild.
This feels like a situation with a sold-out train to a popular destination, where people are already reselling their tickets at some crazy markup, and then suddenly the railway decides to add one more car and opens a flash ticket sale. Investors who feel they missed out on OpenAI and the others are now hoping to catch this last ticket onto the AI train.

except in this case, the train driver from the original train was "sacked" (some believe unfairly), and decided to get their own train to drive. Of course, the smoothness of the ride depends on the driver of the train.

Lots of dismissive comments here.

Ilya proved himself as a leader, scientist, and engineer over the past decade at OpenAI, creating breakthrough after breakthrough that no one else had.

He’s raised enough to compete at the level of Grok, Claude, et al.

He’s offering investors a pure play AGI investment, possibly one of the only organizations available to do so.

Who else would you give $1B to pursue that?

That’s how investors think. There are macro trends, ambitious possibilities on the through line, and the rare people who might actually deliver.

A $5B valuation is standard dilution, no crazy ZIRP-style round here.

If you haven’t seen investing at this scale in person it’s hard to appreciate that capital allocation just happens with a certain number of zeros behind it & some people specialize in making the 9 zero decisions.

Yes, it's predicated on his company being worth more than $500B at some point 10 years down the line (rough math at the end of this comment).

If they build AGI, that is a very cheap valuation.

Think how ubiquitous Siri, Alexa, chatGPT are and how terrible/not useful/wrong they’ve been.

There’s not a significant amount of demand or distribution risk here. Building the infrastructure to use smarter AI is the tech world’s obsession globally.

If AGI works, in any capacity or at any level, it will have a lot of big customers.
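
To make the math behind that "$500B" line explicit, a rough sketch (ignoring dilution from later rounds):

    investment = 1e9                 # the $1B round
    post_money = 5e9                 # the $5B valuation
    stake = investment / post_money  # -> 20%

    exit_value = 500e9  # the hypothetical exit cited above
    proceeds = stake * exit_value
    print(f"stake: {stake:.0%}, proceeds: ${proceeds / 1e9:.0f}B "
          f"({proceeds / investment:.0f}x the check)")  # 20%, $100B, 100x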

All I’m saying is you used the word “if” a lot there.

AGI assumes exponential, preferably infinite and continuous improvement, something unseen before in business or nature.

Neither Siri nor Alexa was sold as AGI, and neither alone comes close to a $1B product. GPT and other LLMs have quickly become a commodity, with AI companies racing to the bottom on inference costs.

I don’t really see the plan, product wise.

Moreover you say:

> Ilya proved himself as a leader, scientist, and engineer over the past decade at OpenAI, creating breakthrough after breakthrough that no one else had.

Which is absolutely true, but that doesn’t imply more breakthroughs are just around the corner, nor does the current technology suggest AGI is coming.

VCs are willing to take a $1B bet on exponential growth with a 500B upside.

Us regular folk see that and are dumbfounded because AI is obviously not going to improve exponentially forever (literally nothing in the observed universe does) and you can already see the logarithmic improvement curve. That’s where the dismissive attitude comes from.

> AI is obviously not going to improve exponentially forever (literally nothing in the observed universe does)

Sure, but it doesn't have to continue forever to be wildly profitable. If it can keep the exponential growth running for another couple of rounds, that's enough to make everyone involved rich. No-one knows quite where the limit is, so it can reasonably be worth a gamble.
