Since this is Y Combinator's site, and as such ostensibly has one or two (if not more) VCs as readers/commenters… can someone please tell me how these companies being invested in across the AI space are going to make returns on the money invested? What's the business plan? (I'm not rich enough to be in these meetings.) I just don't see how the returns will happen.

Open source LLMs exist and will get better. Is it just that all these companies will vie for a winner-take-all situation where the "best" model garners the subscriptions? Doesn't OpenAI already account for a substantial share of the revenue in the whole AI space? I just don't see it. But I don't have VC levels of cash to bet on a 10x or 100x return, so what do I know?

VCs at the big/mega funds make most of their money from fees; they don't actually care as much about potential portfolio exits 10-15 years from now. What they care MOST about is the ability to raise another fund in 2-3 years, so they can milk more fees from LPs. E.g. a 2% fee PER YEAR on a $5bn fund is a lot of guaranteed, risk-free money.

Being able to achieve that depends entirely on two things:

1) deploying capital in the current fund on 'sexy' ideas so they can tell LPs they are doing their job

2) paper markups, which they will get, since Ilya will most definitely be able to raise another round or two at a higher valuation, even if it eventually goes bust or gets sold at cost.

With 1) and 2), they can go back to their existing fund LPs and raise more money for their next fund and milk more fees. Getting exits and carry is just the cherry on top for these megafund VCs.
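
To put rough numbers on that fee stream (a back-of-the-envelope sketch; the flat 2% and the ten-year fund life are my assumptions, real fee schedules often step down over time):

    fund_size = 5_000_000_000        # $5B fund
    annual_fee = 0.02                # 2% management fee per year (assumed flat)
    fund_life_years = 10             # assumed typical fund life

    yearly_fees = fund_size * annual_fee
    total_fees = yearly_fees * fund_life_years
    print(f"${yearly_fees/1e6:.0f}M/yr, ~${total_fees/1e9:.1f}B over the fund's life")
    # -> $100M/yr, ~$1.0B over the fund's life, regardless of how the portfolio performs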

So the question I have is, who are these LPs, and why are they demanding funds go into "sexy" ideas?

I mean, it probably depends on the LP and what their vision is. Not all apples are red; they come in many varieties, some for cider, others for pies. Am I wrong?

The person you're responding to has a very sharp view of the profession. imo it's more nuanced, but not very complicated. In capitalism, capital flows; that's how it works: capital should be deployed. Large pools of capital are typically put to work (this in itself is nuanced). The "put to work" covers various types of deployment of that capital. The simplest way to look at this is risk. Let's take pension funds, because we know they invest in VC firms as LPs. Here* you can find an example of the breakdown of the investments made by this very large pension fund. You'll note most of it is very boring, and the positions held related to venture are tiny; they would need a crazy outsized swing from a VC firm to move any needles. Given all that, it traditionally* has made no sense to bet "down there" (early stage), mostly because the expertise isn't there, and they don't have the time to learn tech/product. Fees are the cost of capital deployment at the early stages, and from what I've been told talking to folks who work at pension funds, they're happy to see VCs take swings.

But... it really depends heavily on the LP base of the firm and what the firm raised its fund on; it's incredibly difficult to generalize. The funds I'm involved with as an LP... in my opinion they can get as "sexy" as they like, because I buy their thesis; then it's just: get the capital deployed!!!!

Most of this is all a standard deviation game, not much more than that.

https://www.otpp.com/en-ca/investments/our-advantage/our-per...
https://www.hellokoru.com/

I can't understand one thing: why are pension funds so fond of risky capital investments? What's the problem with allocating that money into shares of a bunch of old, stable companies and getting a small but steady income? I can understand if a few people with lots of disposable money are looking for some suspense and thrills, using venture capital like others use a casino. But what's the point for pension funds, which face significant problems if they lose the managed money in a risky venture?
I'm not a VC so maybe you don't care what I think, I'm not sure.

Last night, as my 8yo was listening to children's audiobooks while going to sleep, she asked me to have it alternate book A, then B, then A, then B.

I thought, I dunno, maybe I can work out a way to do this. Maybe the app has playlists and maaaaaaaaaaybe has a way to set a playlist on repeat. Or maybe you just can't do this in the app at all. I just sat there and switched it until she fell asleep; it wasn't gonna be more than 2 or 3 anyway, so it's kind of a dumb example.

But here's the point: computers can process language now. I can totally imagine her telling my phone to do that and it being able to do so, even if she's the first person ever to want it to do that. I think the bet is that a very large percentage of the world's software is going to want to gain natural language superpowers. And that this is not a trivial undertaking that will be achieved by a few open source LLMs. It will be a lot of work for a lot of people to make this happen, and as such a lot of money will be made along the way.

Specifically how will this unfold? Nobody knows, but I think they wanna be deep in the game when it does.

How is this any different than the (lack of) business model of all the voice assistants?

How good does it have to be, how many features does it have to have, how accurate does it need to be... in order for people to pay anything? And how much are people actually willing to spend against the $XX billion of investment?

Again it just seems like "sell to AAPL/GOOG/MSFT and let them figure it out".

> How is this any different than the (lack of) business model of all the voice assistants?

Voice assistants do a small subset of the things you can already do easily on your phone. Competing with things you can already do easily on your phone is very hard; touch interfaces are extremely accessible, in many ways more accessible than voice. Current voice assistants only being able to do a small subset of that makes them not really very valuable.

And we aren't updating and rewriting all the world's software to expose its functionality to voice assistants, because the voice assistant needs to be programmed to do each of those things. Each possible interaction must be planned and implemented individually.

I think the bet is that we WILL be doing substantially that, updating and rewriting all the software, now that we can make them do things that are NOT easy to do with a phone or with a computer. And we can do so without designing every individual interaction; we can expose the building blocks and common interactions and LLMs may be able to map much more specific user desires onto those.
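
To make the "building blocks" idea concrete, here's a minimal, purely illustrative Python sketch (every name in it is hypothetical, not any real assistant API): the app exposes one primitive action, and the model's only job is to translate "alternate book A and book B" into a structured call against it.

    from dataclasses import dataclass, field
    from itertools import cycle, islice

    @dataclass
    class Player:
        queue: list = field(default_factory=list)

        def queue_alternating(self, titles, repeats):
            # "book A then B then A then B..." becomes a flat play queue
            self.queue = list(islice(cycle(titles), repeats * len(titles)))

    player = Player()

    # Pretend the language model returned this structured call for the spoken request
    # "alternate book A and book B":
    tool_call = {"name": "queue_alternating",
                 "arguments": {"titles": ["Book A", "Book B"], "repeats": 3}}

    getattr(player, tool_call["name"])(**tool_call["arguments"])
    print(player.queue)  # ['Book A', 'Book B', 'Book A', 'Book B', 'Book A', 'Book B']

The hard part the investment is betting on is the middle step: reliably turning arbitrary natural language into calls like that across all the world's software.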

> How is this any different than the (lack of) business model of all the voice assistants?

Feels very different to me. The dominant ones are run by Google, Apple, and Amazon, and the voice assistants are mostly add-on features that don't by themselves generate much (if any) revenue (well, aside from the news that Amazon wants to start charging for a more advanced Alexa). The business model there is more like "we need this to drive people to our other products where they will spend money; if we don't others will do it for their products and we'll fall behind".

Sure, these companies are also working on AI, but there are also a bunch of others (OpenAI, Anthropic, SSI, xAI, etc.) that are banking on AI as their actual flagship product that people and businesses will pay them to use.

Meanwhile we have "indie" voice assistants like Mycroft that fail to find a sustainable business model and/or fail to gain traction and end up shutting down, at least as a business.

I'm not sure where this is going, though. Sure, some of these AI companies will get snapped up by bigger corps. I really hope, though, that there's room for sustainable, independent businesses. I don't want Google or Apple or Amazon or Microsoft to "own" AI.

Hard to see normies signing up for monthly subs to VC-funded AI startups when a surprisingly large % are still resistant to paying AAPL/GOOG for email/storage/etc. Getting a $10/mo uplift for AI functionality to your iCloud/GSuite/Office365/Prime is a hard enough sell as it stands.

And again, against CapEx of something like $200B, $100/year per user is practically a rounding error.

Not to mention the ongoing OpEx to actually run the inference/services on top.

You'd be very surprised at how much they're raking in from the small sliver of people who do pay. It only seems small because of how much more they make from other things. If you have a billion users, a tiny percentage of paying users is still a gazillion dollars. Getting to a billion users is the hard part. They're betting they'll figure out how to monetize all those eyeballs when they get there.
The voice assistants are too basic. As folks have said before, nobody trusts Alexa to place orders. But if Alexa was as competent as an intelligent & capable human secretary, you would never interact with Amazon.com again.
Congrats on 10k karma :)

One could ask: how is this different from automated call centers? (e.g. "for checking accounts, push 1…") Well, people hate those things. If one could create an automated call center that people didn't hate, it might replace a lot of people.

In general VC is about investing in a large number of companies that mostly fail, and trying to weight the portfolio to catch the few black swans that generate insane returns. Any individual investment is likely to fail, but you want to have a thesis for 1) why it could theoretically be a black swan, and 2) strong belief in the team to execute. Here's a thesis for both of these for SSI:

1. The black swan: if AGI is achievable imminently, the first company to build it could have a very strong first mover advantage due to the runaway effect of AI that is able to self-improve. If SSI achieves intelligence greater than human-level, it will be faster (and most likely dramatically cheaper) for SSI to self-improve than anyone external can achieve, including open-source. Even if open-source catches up to where SSI started, SSI will have dramatically improved beyond that, and will continue to dramatically improve even faster due to it being more intelligent.

2. The team. Basically, Ilya Sutskever was one of the main initial brains behind OpenAI from a research perspective, and in general has contributed immensely to AI research. Betting on him is pretty easy.

I'm not surprised Ilya managed to raise a billion dollars for this. Yes, I think it will most likely fail: the focus on safety will probably slow it down relative to open source, and this is a crowded space as it is. If open source gets to AGI first, or if it drains the market of funding for research labs (at least, research labs disconnected from bigtech companies) by commoditizing inference — and thus gets to AGI first by dint of starving its competitors of oxygen — the runaway effects will favor open-source, not SSI. Or if AGI simply isn't achievable in our lifetimes, SSI will die by failing to produce anything marketable.

But VC isn't about betting on likely outcomes, because no black swans are likely. It's about black swan farming, which means trying to figure out which things could be black swans, and betting on strong teams working on those.
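
A toy version of that portfolio math (all probabilities and multiples invented purely for illustration): even if most bets go to zero, the rare huge outcome carries the fund.

    p_zero, p_modest, p_swan = 0.70, 0.27, 0.03   # assumed outcome probabilities
    m_zero, m_modest, m_swan = 0, 2, 100          # assumed return multiples
    expected_multiple = p_zero*m_zero + p_modest*m_modest + p_swan*m_swan
    print(expected_multiple)  # 3.54 -> the 3% of black swans contribute ~85% of the return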

These VCs are already lining up the exit as they are investing. They all sit on the boards of major corps and grease the acquisitions all the way through. The hit rate of the top funds is all about connections and enablement.
If Ilya is sincere in his belief about safe superintelligence being within reach in a decade or so, and the investors sincerely believe this as well, then the business plan is presumably to deploy the superintelligence in every field imaginable. "SSI" in pharmaceuticals alone would be worth the investment. It could cure every disease humanity has ever known, which should give it at least a $2 trillion valuation. I'm not an economist, but since the valuation is $5bn, it stands to reason that evaluators believe there is at most a 1 in 400 chance of success?
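
Spelling out the implied-odds arithmetic in that last sentence (taking the commenter's $2T success value at face value):

    valuation_today = 5e9        # SSI's reported valuation
    value_if_success = 2e12      # the $2T figure assumed above
    implied_probability = valuation_today / value_if_success
    print(implied_probability, 1 / implied_probability)  # 0.0025 -> roughly 1 in 400
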
> going to make returns on the money invested

Why do you think they need to make money? VCs are not PE firms for a reason. A VC has to find high-risk/high-reward opportunities for their LPs; those don't need to make obvious financial sense, since that is what LPs use private equity for.

Think of it as no different than, say, sports betting: you would like to win, sure, but you don't particularly expect to, nor do you miss the money all that much. For us it's $10; for the LP behind the VC it's $1B.

There are always a few billion dollars every year chasing the outlandish fad, because in the early part of an idea's lifecycle it is not possible to easily differentiate what is actually good from what is garbage.

A couple of years ago it was all crypto. Is this $1B any worse than, say, the roughly similar amount Sequoia put into FTX, or all the countless crypto startups that got VC money? A few years before that it was all SoftBank, from WeWork to a dozen other high-profile investments.

The fad- and FOMO-driven part of the sector garners the most news and attention, but it is not the only VC money. Real startups with real businesses get funded by VCs every day, at, say, medium risk/medium reward, but that news isn't glamorous enough to be covered like this one.

> Doesn’t OpenAI make some substantial part of the revenue for all the AI space? I just don’t see it.

So...

OpenAI's business model may or may not represent a long-term business model. At the moment, it's just the simplest commercial model, and it happened to work for them given all the excitement and a $20 price point that takes advantage of it.

The current "market for AI" is a sprout. Its form doesn't tell you much about the form of the eventual plant.

I don't think the most ambitious VC investments are thought of in concrete market share terms. They are just assuming/betting that an extremely large "AI market" will exist in the future, and are trying to invest in companies that will be in position to dominate that market.

For all they know, their bets could pay off by dominating therapy, entertainment, personal assistance or managing some esoteric aspect of bureaucracy. It's all quite ethereal, at this point.

You don't need a business plan to get AI investment; you just need to talk a good game about how AGI is around the corner and, consequently, how real the safety concerns are.

I would say the investors want to look cool, so they invest in AI projects. And AI people look cool when they predict some improbable hellscape to hype up a product that, from what we can see so far, can only regurgitate (stolen) human work it has seen before in a useful way. I've never seen it invent anything yet, and I'm willing to bet that the search space is too dramatically large to build algorithms that can do it.

The TMV (Total Market Value) of solving AGI is infinity. And furthermore, if AGI is solved, the TMV of pretty much everything else drops to zero.

The play here is to basically invest in all possible players who might reach AGI, because if one of them does, you just hit the infinite money hack.

And maybe with SSI you've saved the world too.

So then the investment thesis hinges on what the investor thinks AGI's chances are: 1/100? 1/1M? 1/1T?

What if it never pans out? Is there infrastructure or other ancillary tech that society could benefit from?

For example, all the science behind the LHC, or bigger and better telescopes: we might never find the theory of everything, but the tech that goes into space travel, the science of storing and processing all that data, better optics, etc. are all useful tech.

It's more game theory. Regardless of the chances of AGI, if you're not invested in it, you will lose everything if it happens. It's more like a hedge on a highly unlikely event. Like insurance.

And we're already seeing a ton of value in LLMs. There are lots of companies that are making great use of LLMs and providing a ton of value. One just launched today, in fact: https://www.paradigmai.com/ (I'm an investor in that). There are many others (some of which I've also invested in).

I too am not rich enough to invest in the foundational models, so I do the next best thing and invest in companies that are taking advantage of the intermediate outputs.

If you want safe investment you could always buy land. AGI won't be able to make more of that.
If ASI arrives we'll need a fraction of the land we use already. We'll all disappear into VR pods hooked to a singularity metaverse and the only sustenance we'll need is some Soylent Green style sludge that the ASI will make us believe tastes like McRib(tm).
ASI may be interested in purchasing your parcel of land for two extra sludges though
We can already make more land. See Dubai for example. And with AGI, I suspect we could rapidly get to space travel to other planets or more efficient use of our current land.

In fact I would say that one of the things that goes to values near zero would be land if AGI exists.

Perhaps, but my mental model is that humans will end up like landed gentry/aristos with robot servants to make stuff, and will all want mansions with grounds, hence there will be a lot of land demand.
Still, those AGI servers need land.
As humans move into space, this statement becomes less true.
I think the investment strategies change when you dump these astronomical sums into a company. It's not like roulette, where you have a fixed probability of success and you figure out how much to bet on it; dumping in a ton of cash can also increase the probability of success, so it becomes more of a pay-to-win game.
I think current models have demonstrated an advanced capacity to navigate “language space”. If we assume “software UI space” is a subset of the language space that is used to guide our interactions with software, then it’s fair to assume models will eventually be able to control operating systems and apps as well as the average human. I think the base case on value creation is a function of the productivity gain that results from using natural language instead of a user interface. So how much time do you spend looking at a screen each day and what is your time worth? And then there’s this option that you get: what if models can significantly exceed the capabilities of the average human?

Conservative math: 3B connected people x $0.50/day “value” x 364 days = $546B/yr. You can get 5% a year risk free, so let’s double it for the risk we’re taking. This yields $5T value. Is a $1B investment on someone who is a thought leader in this market an unreasonable bet?
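
Writing out that arithmetic explicitly (using the comment's own assumptions: $0.50/day of value, 364 days, and a 10% capitalization rate, i.e. double the 5% risk-free rate):

    people = 3e9                       # connected people (assumed)
    value_per_day = 0.50               # $/day of "value" (assumed)
    days = 364                         # 52 weeks * 7, as above
    annual_value = people * value_per_day * days     # $546B/yr
    cap_rate = 0.10                    # 5% risk-free, doubled for risk
    capitalized_value = annual_value / cap_rate      # ~$5.46T
    print(f"${annual_value/1e9:.0f}B/yr -> ${capitalized_value/1e12:.2f}T capitalized")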

Agree with your premise, but the value creation math seems off. $0.50/day might become reality for some percentage of US citizens. But not for 3B people around the world.

There's also the issue of who gets the benefit of making people more efficient. A lot of that will be in the area of more efficient work, which means corporations get more work done with the same number of employees at the same level of salary as before. It's a tough argument to make that you deserve a raise because AI is doing more work for you.

Likely the business plan is multiple funding rounds, each at a greater principal but lower margin, so that the early investors can either sell their shares or wait, at greater risk, for those shares to liquidate. The company never has to make money for the earliest investors to make money, so long as sufficient interest is generated among future investors, and AI is a super hype train.

On a long enough timeline, all these tech companies with valuations greater than $10 billion eventually make money, because they have saturated the market long enough to become unavoidable.

I also don't understand it. If AGI is actually reached, capital as we know it basically becomes worthless. The entire structure of the modern economy and the society surrounding it collapses overnight.

I also don't think there's any way the governments of the world let real AGI stay in the hands of private industry. If it happens, governments around the world will go to war to gain control of it. SSI would be nationalized the moment AGI happened and there's nothing A16Z could do about it.

I think the wishful end goal is AGI.

Picture something 1,000x smarter than a human. The potential value is waaaay bigger than any present company or even government.

Probably won’t happen. But, that’s the reasoning.

My guess (not a VC) is they’ll sell ‘private’ models where safety is a priority: healthcare, government, finance, the EU…
> companies that are being invested in in the AI space are going to make returns on the money invested

By selling to the "dumb(er) money": if a SoftBank / Time / Yahoo appears, they can have it; if not, you can always find willing buyers in an IPO.

Current investors just need the co to be valued at $50B on the next round (likely, given fomo and hype) to make a 10X gain.

Actually converting it to cash? That doesn't happen anymore. Everyone just focuses on IRR and starts the campaign for Fund II.

At least this time, people are actually asking the question.

NVDA::AI

CSCO::.COM

While I get the cynicism (and yes, there is certainly some dumb money involved), it’s important to remember that every tech company that’s delivered 1000X returns was also seen as ridiculously overhyped/overvalued in its early days. Every. Single. One. It’s the same story with Amazon, Apple, Google, Facebook/Meta, Microsoft, etc. etc.

That’s the point of venture capital; making extremely risky bets spread across a wide portfolio in the hopes of hitting the power law lottery with 1-3 winners.

Most funds will not beat the S&P 500, but again, that’s the point. Risk and reward are intrinsically linked.

In fact, due to the diversification effects of uncorrelated assets in a portfolio (see MPT), even if a fund only delivers 5% returns YoY after fees, that can be a great outcome for investors. A 5% return uncorrelated to bonds and public stocks is an extremely valuable financial product.
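
A toy illustration of that MPT point (the weights, returns, and volatilities below are entirely made up): mixing in an uncorrelated 5% asset lowers portfolio volatility without touching the expected return.

    import math

    w_stock, w_fund = 0.8, 0.2          # assumed portfolio weights
    ret_stock, ret_fund = 0.08, 0.05    # assumed expected returns
    vol_stock, vol_fund = 0.18, 0.10    # assumed volatilities

    for rho in (1.0, 0.0):              # perfectly correlated vs uncorrelated
        variance = (w_stock*vol_stock)**2 + (w_fund*vol_fund)**2 \
                   + 2*rho*w_stock*w_fund*vol_stock*vol_fund
        print(rho, round(w_stock*ret_stock + w_fund*ret_fund, 3), round(math.sqrt(variance), 3))
    # Same 7.4% expected return in both cases, but lower volatility when rho = 0.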

It's clear that humans find LLMs valuable. What companies will end up capturing a lot of that value by delivering the most useful products is still unknown. Betting on one of the biggest names in the space is not a stupid idea (given the purpose of VC investment) until it actually proves to be one in the real world.

> please tell me how these companies that are being invested in in the AI space are going to make returns on the money invested? What’s the business plan?

Not a VC, but I'd assume in this case the investors are not investing in a plausible biz plan, but in a group of top talent, especially given how early-stage the company is. The $5B valuation is really the valuation of the elite team in an arguably hyped market.

A lot of these "investments" are probably in the form of credits for training compute from hyperscalers and other GPU data centers.

Look at the previous such investments Microsoft and AWS have made in OpenAI and Anthropic.

They need use cases and customers for their initial investment of 750 billion dollars. Investing in the best people in the field is then of course a given.

It's not that complicated. Your users pay a monthly subscription fee, like they do with ChatGPT or Midjourney. At some point they're hoping AI gets so good that anyone without access is at a severe disadvantage in society.
Sometimes it's not about returns but about transferring wealth and helping out friends. Happens all the time. The seed money will get out, all the rest of the money will get burned.
The "safe" part. It's a plan to drive the safety scare into a set of regulations that will create a moat, at which point you don't need to worry about open source models, or new competitors.
The company that builds the best LLM will reap dozens or hundreds of billions in reward. It’s that simple.

It has nothing to do with AGI and everything to do with being the first-party provider for Microsoft and the like.

Staking the territory in a new frontier.
The VCs probably assume that a pivot to military/surveillance/propaganda applications is possible if/when AGI fails.
For at least some of the investors, a successful exit doesn't require building a profitable business.
> how [...] return on the money invested? What’s the business plan?

I don't understand this question. How could even average-human-level AGI not be useful in business, and profitable, in a million different ways? (You know, just like humans, except more so.) Let alone higher-human-level, let alone moderately-superhuman level, let alone exponential level if you are among the first? (And see Charles Stross, Accelerando, 2005, for how being first is not the end of the story.)

I can see one way for "not profitable" for most applications: if computing for AGI becomes too expensive, that is, if AGI-level capability is too compute-intensive. But even then, that only eliminates some applications and leaves all the many high-potential-profit ones, starting with plain old finance and continuing with drug development, etc.

Open source LLMs exist, just like lots of other open source projects, which have rarely prevented commercial projects from making money. And so far they are not even trying for AGI. If anything, the open source LLM becomes one of the agents in the private AGI. But presumably $1 billion buys a lot of effort that the open source LLM can't afford.

A more interesting question is one of tradeoff. Is this the best way to invest 1 billion right now? From a returns point of view? But even this depends on how many billions you can round up and invest.

Getting funded by a16z is if anything a sign that the field is not hot anymore.
All money is green, regardless of level of sophistication. If you’re using investment firm pedigree as signal, gonna have a bad time. They’re all just throwin’ darts under the guise of skill (actor/observer|outcome bias; when you win, it is skill; when you lose, it was luck, broadly speaking).

Edit: @jgalt212: Indeed, one should be sophisticated themselves when negotiating investment to not be unduly encumbered by shades of the unsophisticated or potentially folks not optimizing for aligned interests. But let us not get too far off topic and risk subthread detachment. Feel free to cut a new thread for further discussion on the subject.

> All money is green, regardless of level of sophistication.

True, but most, if not all, money comes with strings attached.

Why is that?
Might be the almost-securities-fraud they were doing with crypto when it was fizzling out in 2022.

Regardless, the point is moot; money is money, and a16z's money isn't their money but other people's money.

Lots of comments either defending this ("it's taking a chance on being the first to build AGI with a proven team") or saying "it's a crazy valuation for a 3 month old startup". But both of these "sides" feel like they miss the mark to me.

On one hand, I think it's great that investors are willing to throw big chunks of money at hard (or at least expensive) problems. I'm pretty sure all the investors putting money in will do just fine even if their investment goes to zero, so this feels exactly what VC funding should be doing, rather than some other common "how can we get people more digitally addicted to sell ads?" play.

On the other hand, I'm kind of baffled that we're still talking about "AGI" in the context of LLMs. While I find LLMs to be amazing, and an incredibly useful tool (if used with a good understanding of their flaws), the more I use them, the more that it becomes clear to me that they're not going to get us anywhere close to "general intelligence". That is, the more I have to work around hallucinations, the more that it becomes clear that LLMs really are just "fancy autocomplete", even if it's really really fancy autocomplete. I see lots of errors that make sense if you understand an LLM is just a statistical model of word/token frequency, but you would expect to never see these kinds of errors in a system that had a true understanding of underlying concepts. And while I'm not in the field so I may have no right to comment, there are leaders in the field, like LeCun, who have expressed basically the same idea.

So my question is, have Sutskever et al. provided any acknowledgement of how they intend to "cross the chasm" from where we are now with LLMs to a model of understanding, or has it been mainly "look what we did before, you should take a chance on us to make discontinuous breakthroughs in the future"?

Ilya has discussed this question: https://www.youtube.com/watch?v=YEUclZdj_Sc
Thank you very much for posting! This is exactly what I was looking for.

On one hand, I understand what he's saying, and that's why I have been frustrated in the past when I've heard people say "it's just fancy autocomplete" without emphasizing the awesome capabilities that can give you. While I haven't seen this video by Sutskever before, I have seen a very similar argument by Hinton: in order to get really good at next token prediction, the model needs to "discover" the underlying rules that make that prediction possible.

All that said, I find his argument wholly unconvincing (and again, I may be waaaaay stupider than Sutskever, but there are other people much smarter than I who agree). And the reason for this is because every now and then I'll see a particular type of hallucination where it's pretty obvious that the LLM is confusing similar token strings even when their underlying meaning is very different. That is, the underlying "pattern matching" of LLMs becomes apparent in these situations.
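
For anyone who hasn't looked under the hood, this is roughly what "fancy autocomplete" means mechanically: a minimal sketch using GPT-2 via Hugging Face transformers (any causal LM works the same way), where the model only ever emits scores for the next token.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    for _ in range(5):
        logits = model(ids).logits[0, -1]      # a score for every token in the vocabulary
        next_id = torch.argmax(logits)         # greedy pick: the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tok.decode(ids[0]))

Chat-style systems are, at bottom, this loop run at enormous scale with fancier sampling and fine-tuning on top; whether that loop can ever amount to "understanding" is exactly the disagreement here.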

As I said originally, I'm really glad VCs are pouring money into this, but I'd easily make a bet that in 5 years LLMs will be nowhere near human-level intelligence on some tasks, especially where novel discovery is required.

Watching that video actually makes me completely unconvinced that SSI will succeed if they are hinging it on LLM...

He puts a lot of emphasis on the fact that 'to generate the next token you must understand how', when that's precisely the parlor trick that is making people lose their minds (myself included) with how effective current LLMs are. The fact that it can simulate some low-fidelity reality with _no higher-level understanding of the world_, using purely linguistic/statistical analysis, is mind-blowing. To say "all you have to do is then extrapolate" is the ultimate "draw the rest of the owl" argument.

> but I'd easily make a bet that in 5 years that LLMs will be nowhere near human-level intelligence on some tasks

I wouldn't. There are some extraordinarily stupid humans out there. Worse, making humans dumber is a proven and well-known technology.

I actually echo your exact sentiments. I don't have the street cred but watching him talk for the first few minutes I immediately felt like there is just no way we are going to get AGI with what we know today.

Without some raw reasoning capacity (maybe neuro-symbolic is the answer, maybe not), LLMs won't be enough. Reasoning is super tough because it's not as easy as predicting the next most likely token.

>All that said, I find his argument wholly unconvincing (and again, I may be waaaaay stupider than Sutskever, but there are other people much smarter than I who agree). And the reason for this is because every now and then I'll see a particular type of hallucination where it's pretty obvious that the LLM is confusing similar token strings even when their underlying meaning is very different. That is, the underlying "pattern matching" of LLMs becomes apparent in these situations.

So? One of the most frustrating parts of these discussions is that for some bizarre reason, a lot of people have a standard of reasoning (for machines) that only exists in fiction or their own imaginations.

Humans have a long list of cognitive shortcomings. We find them interesting and give them all sorts of names, like cognitive dissonance or optical illusions. But we don't then draw silly conclusions like "humans don't reason".

A general reasoning engine that makes no mistakes, contradictions, or confusions in output or process does not exist in real life, whether you believe humans are the only intelligent species on the planet or are gracious enough to extend the capability to some of our animal friends.

So the LLM confuses tokens every now and then. So what?

They might never work for novel discovery, but that can probably be handled by an outside loop or online (in-context) learning. The thing is that 100k or 1M contexts are a marketing scam for now.
"It will focus on building a small highly trusted team of researchers and engineers split between Palo Alto, California and Tel Aviv, Israel."

Why Tel Aviv, Israel?

Because it's a startup hub, there is great engineering talent there, and the cost of living is lower than the US.
Cost of living is extremely high in Tel Aviv, but the rest is true.
"…a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," Gross said in an interview.

A couple years??

Well, since it's no longer OK to just suck up anyone's data to train your AI, it will be a new challenge for them to avoid that pitfall. I can imagine it will take some time...
I believe the commenter is concerned about how _short_ this timeline is. Superintelligence in a couple years? Like, the thing that can put nearly any person at a desk out of a job? My instinct with unicorns like this is to say 'actually it'll be five years and it won't even work', but Ilya has a track record worth believing in.
What laws have actually changed that make it no longer okay?

We all know that OpenAI did it.

> Sutskever said his new venture made sense because he "identified a mountain that's a bit different from what I was working on."

I guess the "mountain" is the key. "Safe" alone is far from being a product. As for current LLMs, I'd even question how valuable "safe" can be.

To be honest, from the way "safe" and "alignment" are perceived on r/LocalLLaMA, in two years it's not going to be very appealing.

We'll be able to get most of ChatGPT-4o's capabilities locally on affordable hardware, including "unsafe" and "unaligned" data, as quantization noise is drastically reduced, meaning smaller quantized models that run on good-enough hardware.

We'll see a huge reduction in price and inference times within two years, and whatever SSI trains won't be economically viable enough to recoup that $1B investment, guaranteed.

It all depends on GPT-5's performance. Right now Sonnet 3.5 is the best, but there's nothing really groundbreaking. SSI's success will depend on how much uplift it can provide over GPT-5, which already isn't expected to be a significant leap beyond GPT-4.

Lots of dismissive comments here.

Ilya proved himself as a leader, scientist, and engineer over the past decade at OpenAI, creating breakthrough after breakthrough that no one else had.

He’s raised enough to compete at the level of Grok, Claude, et al.

He’s offering investors a pure play AGI investment, possibly one of the only organizations available to do so.

Who else would you give $1B to pursue that?

That’s how investors think. There are macro trends, ambitious possibilities on the through line, and the rare people who might actually deliver.

A $5B valuation is standard dilution, no crazy ZIRP-style round here.

If you haven’t seen investing at this scale in person it’s hard to appreciate that capital allocation just happens with a certain number of zeros behind it & some people specialize in making the 9 zero decisions.

Yes, it’s predicated on his company being worth more than $500B at some point 10 years down the line.

If they build AGI, that is a very cheap valuation.

Think how ubiquitous Siri, Alexa, chatGPT are and how terrible/not useful/wrong they’ve been.

There’s not a significant amount of demand or distribution risk here. Building the infrastructure to use smarter AI is the tech world’s obsession globally.

If AGI works, in any capacity or at any level, it will have a lot of big customers.

All I’m saying is you used the word “if” a lot there.

AGI assumes exponential, preferably infinite and continuous improvement, something unseen before in business or nature.

Neither Siri nor Alexa was sold as AGI, and neither alone comes close to being a $1B product. GPT and other LLMs have quickly become a commodity, with AI companies racing to the bottom on inference costs.

I don’t really see the plan, product wise.

Moreover you say: > Ilya proved himself as a leader, scientist, and engineer over the past decade with OpenAI for creating break-through after break-through that no one else had.

Which is absolutely true, but that doesn’t imply more breakthroughs are just around the corner, nor does the current technology suggest AGI is coming.

VCs are willing to take a $1B bet on exponential growth with a $500B upside.

Us regular folk see that and are dumbfounded because AI is obviously not going to improve exponentially forever (literally nothing in the observed universe does) and you can already see the logarithmic improvement curve. That’s where the dismissive attitude comes from.

> literally nothing in the observed universe does

There are many things on earth that don't exist anywhere else in the universe (as far as we know). Life is one of them. Just think how unfathomably complex human brains are compared to what's out there in space.

Just because something doesn't exist anywhere in the universe doesn't mean that humans can't create it (or humans can't create a machine that creates something that doesn't exist anywhere else) even if it might seem unimaginably complex.

> AI is obviously not going to improve exponentially forever (literally nothing in the observed universe does)

Sure, but it doesn't have to continue forever to be wildly profitable. If it can keep the exponential growth running for another couple of rounds, that's enough to make everyone involved rich. No-one knows quite where the limit is, so it can reasonably be worth a gamble.

I’m curious if you’d be willing to share more of your personal context?

My intent is to be helpful. I’m unsure of how much additional context might be useful to you.

Investor math & mechanics are straightforward: institutional funds & family offices want to get allocations with investors like a16z because they get to invest in deals that they could not otherwise invest in. The top VCs specialize in getting into deals that most investors will never get the opportunity to put money into. This is one of them.

For their Internal Rate of Return (IRR) to work out, at least one investment needs to return 100x or more on the valuation. VCs today focus on placing bets where that calculation can happen. Most investors aren't that confident in their ability to predict that, so they invest alongside lead investors who are. a16z is famous for that.

There are multiple companies worth $1T+ now, so this isn't a fantasy investment; it's a bet.

The bet doesn’t need to be that AGI continues to grow in power infinitely, it just needs to create a valuable company in roughly a ten year time horizon.

Many of the major tech companies today are worth more money than anyone predicted, including the founders (Amazon, Microsoft, Apple, Salesforce, etc.). An outlier win in tech can have incredible upside.

LLMs are far from commoditized yet, but the growth of the cloud proves you can make a fortune on the commoditization of tech. Commoditization is another way of saying “everyone uses this as a cost of doing business now.” Pretty great spot to land on.

My personal view is that AGI will deliver a post-product world, Eric Schmidt recently stated the same. Products are digital destinations humans need to go to in order to use a tool to create a result. With AGI you can get a “product” on the fly & AI has potentially very significant advantages in interacting with humans in new ways within existing products & systems, no new product required. MS Copilot is an early example.

It's completely fine to be dismissive of new tech; it's common, even. What brings you here?

I’m here on HN because I love learning from people who are curious about what is possible & are exploring it through taking action. Over a couple decades of tech trends it’s clear that tech evolves in surprising ways, most predictions eventually prove correct (though the degree of impact is highly variable), and very few people can imagine the correct mental model of what that new reality will be like.

I agree with Zuck:

The best way to predict the future is to build it.

"if" is the name of the game in investing.

You say you don't see it. Fine. These investors do; that's why they are investing and you are not.

I'm also confused by the negativity on here. Ilya had a direct role in creating the algorithms and systems that created modern LLMs. He pioneered the first deep learning computer vision models.
Even with Ilya demonstrating his capabilities in those areas you mentioned, it seems like investors are simply betting on his track record, hoping he’ll replicate the success of OpenAI. This doesn’t appear to be an investment in solving a specific problem with a clear product-market fit, which is why the reception feels dismissive.
I keep seeing praise for Ilya's achievements as a scientist and engineer, but until ChatGPT, OpenAI was in the shadow of DeepMind, and to my knowledge (I might be wrong) he was not that much involved with ChatGPT?

The whole LLM race seems to be decelerating, and the hard problems around LLMs seem not to have seen much progress in the last couple of years(?)

In my naive view, I think a guy like David Silver, the creator/co-lead of AlphaZero, deserves more praise, at least as a leader/scientist. He even has lectures about deep RL from after doing AlphaGo: https://www.davidsilver.uk/teaching/

He has no LinkedIn and came straight from the game-dev industry before learning about RL.

I would put my money on him.

> If AGI works, in any capacity or at any level, it will have a lot of big customers.

This is wrong. The models may end up cheaply available or even free. The business cost will be in hosting and integration.

I have this rock here that might grant wishes. I will sell it to you for $10,000. Sure it might just be a rock, but if it grants wishes $10k is a very cheap price!