Open source LLMs exist and will get better. Is it just that all these companies will vie for a winner-take-all situation where the "best" model garners the subscriptions? Doesn't OpenAI already account for a substantial share of the revenue in the entire AI space? I just don't see it. But I don't have VC levels of cash to bet on a 10x or 100x return, so what do I know?
Achieving that depends entirely on two things:
1) deploying capital in the current fund on 'sexy' ideas so they can tell LPs they are doing their job
2) paper markups, which they will get, since Ilya will most definitely be able to raise another round or two at a higher valuation, even if it eventually goes bust or gets sold at cost.
With 1) and 2), they can go back to their existing fund LPs and raise more money for their next fund and milk more fees. Getting exits and carry is just the cherry on top for these megafund VCs.
I mean, it probably depends on the LP and what their vision is. Not all apples are red; they come in many varieties, some for cider, others for pies. Am I wrong?
But... it really depends heavily on the firm's LP base and what the firm raised its fund on; it's incredibly difficult to generalize. The funds I'm involved with as an LP... in my opinion they can get as "sexy" as they like, because I buy their thesis. Then it's just: get the capital deployed!
Most of this is a standard-deviation game, not much more than that.
https://www.otpp.com/en-ca/investments/our-advantage/our-per... https://www.hellokoru.com/
Last night, as my 8yo was listening to children's audiobooks while going to sleep, she asked me to have it alternate book A, then B, then A, then B.
I thought, I dunno, maybe I can work out a way to do this. Maybe the app has playlists, and maaaaaaaaaaybe has a way to set a playlist on repeat. Or maybe you just can't do this in the app at all. I just sat there and switched it until she fell asleep; it wasn't gonna be more than 2 or 3 anyway, so it's kind of a dumb example.
But here's the point: computers can process language now. I can totally imagine her telling my phone to do that and it being able to do so, even if she's the first person ever to want it to. I think the bet is that a very large percentage of the world's software is going to want to gain natural-language superpowers, and that this is not a trivial undertaking that will be achieved by a few open source LLMs. It will be a lot of work for a lot of people to make this happen, and as such, a lot of money will be made along the way.
Specifically how will this unfold? Nobody knows, but I think they wanna be deep in the game when it does.
How good does it have to be, how many features does it have to have, how accurate does it need to be... in order for people to pay anything? And how much are people actually willing to spend against the $XX billion of investment?
Again it just seems like "sell to AAPL/GOOG/MSFT and let them figure it out".
Feels very different to me. The dominant ones are run by Google, Apple, and Amazon, and the voice assistants are mostly add-on features that don't by themselves generate much (if any) revenue (well, aside from the news that Amazon wants to start charging for a more advanced Alexa). The business model there is more like "we need this to drive people to our other products where they will spend money; if we don't others will do it for their products and we'll fall behind".
Sure, these companies are also working on AI, but there are also a bunch of others (OpenAI, Anthropic, SSI, xAI, etc.) that are banking on AI as their actual flagship product that people and businesses will pay them to use.
Meanwhile we have "indie" voice assistants like Mycroft that fail to find a sustainable business model and/or fail to gain traction and end up shutting down, at least as a business.
I'm not sure where this is going, though. Sure, some of these AI companies will get snapped up by bigger corps. I really hope, though, that there's room for sustainable, independent businesses. I don't want Google or Apple or Amazon or Microsoft to "own" AI.
And again, against CapEx of something like $200B, $100/year per user practically rounds to zero.
Not to mention the ongoing OpEx to actually run the inference/services on top.
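For a sense of scale, here's the back-of-envelope math (both figures are this thread's assumptions, not actual financials):

    # Back-of-envelope on the numbers above (assumed, not actual financials)
    capex = 200e9                  # assumed industry-wide CapEx, dollars
    revenue_per_user_year = 100.0  # assumed subscription revenue, $/user/year

    user_years = capex / revenue_per_user_year
    print(f"{user_years:.1e} user-years of revenue to cover CapEx alone")
    # -> 2.0e+09: two billion user-years, before any OpEx for inference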
The play here is to basically invest in all possible players who might reach AGI, because if one of them does, you just hit the infinite money hack.
And maybe with SSI you've saved the world too.
What if it never pans out? Is there infrastructure or other ancillary tech that society could benefit from anyway?
For example, all the science behind the LHC, or bigger and better telescopes: we might never find a theory of everything, but the tech that goes into space travel, the science of storing and processing all that data, better optics, etc. are all useful.
And we're already seeing a ton of value in LLMs. Lots of companies are making great use of them. One just launched today, in fact: https://www.paradigmai.com/ (I'm an investor in that). There are many others (some of which I've also invested in).
I too am not rich enough to invest in the foundational models, so I do the next best thing and invest in companies that are taking advantage of the intermediate outputs.
In fact, I'd say that if AGI exists, one of the things whose value goes to near zero would be land.
Edit: @jgalt212: Indeed, one should be sophisticated oneself when negotiating investment, so as not to be unduly encumbered by the unsophisticated or by folks not optimizing for aligned interests. But let us not get too far off topic and risk subthread detachment. Feel free to cut a new thread for further discussion on the subject.
True, but most, if not all, money comes with strings attached.
On one hand, I think it's great that investors are willing to throw big chunks of money at hard (or at least expensive) problems. I'm pretty sure all the investors putting money in will do just fine even if their investment goes to zero, so this feels like exactly what VC funding should be doing, rather than some other common "how can we get people more digitally addicted to sell ads?" play.
On the other hand, I'm kind of baffled that we're still talking about "AGI" in the context of LLMs. While I find LLMs to be amazing and an incredibly useful tool (if used with a good understanding of their flaws), the more I use them, the clearer it becomes to me that they're not going to get us anywhere close to "general intelligence". That is, the more I have to work around hallucinations, the clearer it becomes that LLMs really are just "fancy autocomplete", even if it's really, really fancy autocomplete. I see lots of errors that make sense if you understand that an LLM is just a statistical model of word/token frequency, but that you would never expect to see in a system with a true understanding of the underlying concepts. And while I'm not in the field, so I may have no right to comment, there are leaders in the field, like LeCun, who have expressed basically the same idea.
So my question is, have Sutskever et al. given any indication of how they intend to "cross the chasm" from where we are now with LLMs to a model of understanding, or has it been mainly "look what we did before, you should take a chance on us to make discontinuous breakthroughs in the future"?
On one hand, I understand what he's saying, and that's why I have been frustrated in the past when I've heard people say "it's just fancy autocomplete" without emphasizing the awesome capabilities that can give you. While I haven't seen this video by Sutskever before, I have seen a very similar argument by Hinton: in order to get really good at next token prediction, the model needs to "discover" the underlying rules that make that prediction possible.
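To make "next token prediction" concrete, here's a minimal sketch of the loop these models run, using GPT-2 via Hugging Face transformers purely as a small stand-in (the model choice and prompt are mine, not anything SSI uses):

    # Greedy next-token loop with a small causal LM (GPT-2 as a stand-in;
    # frontier models do the same thing at vastly larger scale).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("To predict the next token well, a model must",
              return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(10):
            logits = model(ids).logits[0, -1]  # a score for every vocab token
            next_id = torch.argmax(logits)     # greedy: take the most likely
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tok.decode(ids[0]))

Whether getting very good at that loop forces a model to "discover" underlying rules is exactly the point in dispute.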
All that said, I find his argument wholly unconvincing (and again, I may be waaaaay stupider than Sutskever, but there are other people much smarter than I who agree). The reason is that every now and then I'll see a particular type of hallucination where it's pretty obvious the LLM is confusing similar token strings even when their underlying meanings are very different. That is, the underlying "pattern matching" of LLMs becomes apparent in these situations.
As I said originally, I'm really glad VCs are pouring money into this, but I'd easily bet that in 5 years LLMs will still be nowhere near human-level intelligence on some tasks, especially where novel discovery is required.
Without some raw reasoning capacity (maybe neuro-symbolic is the answer, maybe not), LLMs won't be enough. Reasoning is super tough because it's not as simple as predicting the next most likely token.
That doesn't really imply "let's just do more LLMs", though.
Why Tel Aviv, in Israel?
A couple years??
I guess the "mountain" is the key. "Safe" alone is far from being a product. As for current LLMs, I'd even question how valuable "safe" can be.
We'll be able to match most of GPT-4o's capabilities locally on affordable hardware, including "unsafe" and "unaligned" data, as quantization noise is drastically reduced, meaning smaller quantized models that can run on good-enough hardware.
We'll see a huge reduction in price and inference times within two years, and whatever SSI is trained on won't be economically viable enough to recoup that $1B investment, guaranteed.
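For what it's worth, the "local" part is already routine. A sketch with llama-cpp-python, where the model file is hypothetical and any GGUF-quantized checkpoint would do:

    # Running a 4-bit-quantized model locally (hypothetical model file;
    # any GGUF checkpoint works with llama-cpp-python).
    from llama_cpp import Llama

    llm = Llama(model_path="./some-7b.Q4_K_M.gguf", n_ctx=2048)
    out = llm("Explain in one sentence why local inference keeps getting cheaper:",
              max_tokens=64)
    print(out["choices"][0]["text"])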
It all depends on GPT-5's performance. Right now Sonnet 3.5 is the best, but there's nothing really groundbreaking. SSI's success will depend on how much uplift it can provide over GPT-5, which already isn't expected to be a significant leap beyond GPT-4.
Ilya proved himself as a leader, scientist, and engineer over the past decade at OpenAI, creating breakthrough after breakthrough that no one else had.
He’s raised enough to compete at the level of Grok, Claude, et al.
He’s offering investors a pure play AGI investment, possibly one of the only organizations available to do so.
Who else would you give $1B to pursue that?
That’s how investors think. There are macro trends, ambitious possibilities on the through line, and the rare people who might actually deliver.
A $5B valuation is standard dilution; no crazy ZIRP-style round here.
If you haven't seen investing at this scale in person, it's hard to appreciate that capital allocation just happens with a certain number of zeros behind it, and some people specialize in making the nine-zero decisions.
Yes, it’s predicated on his company being worth more than $500B at some point 10 years down the line.
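The rough math behind that figure, ignoring dilution across future rounds and assuming the reported $5B post-money:

    # Where "$500B" comes from (simplified: no dilution in later rounds)
    post_money = 5e9       # reported post-money valuation after the $1B round
    target_multiple = 100  # the kind of multiple that justifies this risk
    print(f"Implied exit valuation: ${post_money * target_multiple / 1e9:,.0f}B")
    # -> $500B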
If they build AGI, that is a very cheap valuation.
Think how ubiquitous Siri, Alexa, and ChatGPT are, and how terrible/not useful/wrong they've been.
There’s not a significant amount of demand or distribution risk here. Building the infrastructure to use smarter AI is the tech world’s obsession globally.
If AGI works, in any capacity or at any level, it will have a lot of big customers.
AGI assumes exponential, preferably infinite and continuous improvement, something unseen before in business or nature.
Neither Siri nor Alexa was sold as AGI, and neither alone comes close to being a $1B product. GPT and other LLMs have quickly become a commodity, with AI companies racing to the bottom on inference costs.
I don’t really see the plan, product wise.
Moreover, you say:

> Ilya proved himself as a leader, scientist, and engineer over the past decade at OpenAI, creating breakthrough after breakthrough that no one else had.
Which is absolutely true, but that doesn’t imply more breakthroughs are just around the corner, nor does the current technology suggest AGI is coming.
VCs are willing to take a $1B bet on exponential growth with a $500B upside.
Us regular folk see that and are dumbfounded because AI is obviously not going to improve exponentially forever (literally nothing in the observed universe does) and you can already see the logarithmic improvement curve. That’s where the dismissive attitude comes from.
Sure, but it doesn't have to continue forever to be wildly profitable. If it can keep the exponential growth running for another couple of rounds, that's enough to make everyone involved rich. No-one knows quite where the limit is, so it can reasonably be worth a gamble.