Agreed, but I can't imagine that everyone here on HN who's defending LLMs so... misguidedly, so obviously ignoring observable reality... is financially interested in LLMs' success. Surely they can't all be?
I’m financially interested in Anthropic being successful, since it means their prices are more likely to go down, or their models to get (even) better.
Honestly, if you don’t think it works for you, that’s fine with me. I just feel the dismissive attitude is weird since it’s so incredibly useful to me.
Given they're a VC-backed company rushing to claim market share and apparently selling well below cost, it's not clear that will be the case. Compare the ride-share companies, one case where prices went up once the companies succeeded.
Why is it weird, exactly? I don't write throwaway projects, so the mostly one-off nature of LLM code generation isn't useful to me. And I'm not the only one.
If you can give examples of incredible usefulness then that can advance the discussion. Otherwise it's just us trying to out-shout each other, and I'm not interested in that.
It's a message board frequented by extremely tech-involved people. I'd expect the vast majority of people here to have some financial interest in LLMs - big-tech equity, direct employment, AI startup, AI influencer, or whatever.
Orrrrr, simpler than imagining a vast conspiracy: their observations just don't match yours. If you're writing, say, C# with some esoteric libraries using Copilot, it's easy to see it as glorified auto-complete that hallucinates to the point of being unusable, because there's not enough training data. If you're using Claude with Aider to write a webpage in NextJS, you'll see it as a revolution in programming, because so much of that is on Stack Overflow.

The other side of it is how much the code needs to work versus how good it has to look. If you're used to shipping gross hacks you're ashamed of, where the absolute quality of the code is secondary to it working and passing tests, rather than engineering the most beautifully organized code before shipping once, then the generated code having an extra variable or unused cruft isn't as big an indictment of LLM-assisted programming as you believe it to be.
Why do you think your observable reality is the only one, and the correct one at that? Looking at your mindset, as well as at the objections to the contrary (and the objectors' belief that they're correct), the truth is likely somewhere in between the two extremes.