LLMs do not actually make anything better for anyone. You have to constantly correct them. It's like having a junior coder under your wing that never learns from its mistakes. I can't imagine anyone actually feeling productive using one to work.
I don't know what to think about comments like this. So many of them come from accounts that are days or at most weeks old. I don't know if this is astroturfing, or you really are just a new account and this is your experience.

As somebody who has been coding for just shy of 40 years and has gone through the actual pain of learning to run a highly productive dev team, my experience does not match yours. Even great devs will forget some of the basics and make mistakes, and I wish every junior (hell, even seniors) were as effective as the LLMs are turning out to be. Put an LLM in the hands of a seasoned engineer who also has the skills to manage projects and mentor junior devs and you have a powerful accelerator. I'm seeing the outcome of that every day on my team. The velocity is up AND the quality is up.

> The velocity is up AND the quality is up.

This is not my experience on a team of experienced SWEs working on a product worth $100M/year.

Agents are a great search engine for a codebase and really nice for debugging, but any time we have them write feature code they make too many mistakes. We end up spending more time tuning the process than it would take to just write the code, AND you are trading human context for agent context that gets wiped.

I can't speak to your experience. I can only speak to mine.

We've spent years reducing old debt and modernizing our application and processes. The places where we've made that investment are where we are currently seeing the additional acceleration. The places where we haven't are still stuck in the mud, but per your "search engine for a codebase" comment our engineers are starting to engage with systems they would not have previously touched.

There are areas for sure where LLMs would fall down. That's where we need the experts to guide them and restructure the project so that it is LLM friendly (which also just happens to be the same things that make the app better for human engineers).

And I'm serious about the quality comment. Maybe there's a difference in how your team is using the tools, but I have individuals on my team who are learning to leverage the tools to create better outputs, not just pump out features faster.

I'm not saying LLMs solve everything, FAR from it. But they're giving a master weapon to an experienced warrior.

Your experience matches mine too. Experienced devs are increasing their output while maintaining quality. I'm personally writing better-quality code than before because it's trivial to tell the AI to refactor or rename something. I care about good code, but I'm also lazy, so I have my Claude skills set up to do it for me. (Of course, I always keep the human in the loop and review the outputs.)

You said that you're restructuring the project to be LLM friendly, which also makes the app better for humans. I 100% agree with this. Code that is unreadable and unmaintainable for humans is much more difficult for AI to understand. I think companies that practiced or prioritized code hygiene will be ahead of the game when it comes to getting good results with agentic AI.

Who would I possibly be astroturfing for? The entire industry is all-in on LLMs.
I can't speak for you specifically; it's just a trend I'm seeing, and unfortunately your two-day-old account falls into that bucket. There are a lot of people who have a lot to lose, or who are very afraid of what LLMs will do, so there's plenty of incentive to do this.

I would be curious to find out whether I'm just imagining this or it really is a trend.

At the same time, you have astroturfing from LLM producers too, so...
Agreed, but I find that astroturfing far more obvious.
Agreed.

It's clear to me as a more seasoned engineer that I can prompt the LLM to do what I want (more or less) and it will catch generally small errors in my approach before I spend time trying them. I don't often feel like I ended up in a different place than I would have on my own. I just ended up there faster, making fewer concessions along the way.

I do worry I'll become lazy and spoiled. And then lose access to the LLM and feel crippled. That's concerning. I also worry that others aren't reading the patches the AI generates like I am before opening PRs, which is also concerning.

This is a very reasonable comment. IMO it's a fallacy to weigh the age of an account, especially when what's being shared is subjective experience.
A junior engineer who might spend a few hours trying to understand why you added a mutex, reading blogs on common patterns, might come back with a question about why you locked it twice in one thread in some case you didn't consider. Just because someone lacks the experience and knowledge you have, doesn't mean they cannot learn and be helpful. Sometimes those with the most to learn are the most willing to put the hours in trying.
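To make the double-lock point concrete, here's a minimal sketch using Python's threading module (the scenario and names are illustrative, not from the thread): a plain Lock is not reentrant, so a second acquire from the same thread is exactly the self-deadlock the junior might catch.

```python
import threading

# A plain Lock is non-reentrant: re-acquiring it from the same
# thread can never succeed. The timeout makes the failure visible
# instead of hanging the program.
lock = threading.Lock()
lock.acquire()                        # first acquisition succeeds
ok = lock.acquire(timeout=0.1)        # same thread tries again
print(ok)                             # False: without the timeout, this deadlocks
lock.release()

# RLock is reentrant, so the same call pattern is safe -- each
# acquire must be balanced by a release.
rlock = threading.RLock()
rlock.acquire()
print(rlock.acquire(blocking=False))  # True: reentrant acquisition
rlock.release()
rlock.release()
```

Whether to reach for RLock or restructure the code so the lock is only taken once is exactly the kind of design question that discussion is about.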
You need to learn to use the tool better, clearly, if you have such an unhinged take as this.
No, to be fair, I do see what he's saying. I see a major difference between the more expensive models and the cheaper ones. The cheaper (usually default) ones make mistakes all the damn time. You can be as clear as day with them and they simply don't have the context window or specs to make accurate, well-reasoned decisions, and it is a bit like having a terrible junior work alongside you, fresh out of university.
Emphasis on the "terrible" part of the junior.

The cheaper models can't be taught or improved due to their inherent limitations, which makes it a huge pain to even try the simplest of tasks. Perpetually, no matter your instruction file(s).

I agree. The more expensive models, I must admit, have impressed me, but sometimes they take so long and cost so much you might as well do it yourself. That being said, if you're feeling particularly lazy there is now a "do it for me" button built into code editors, but until perhaps 2035 this technology is still somewhat pedestrian compared to what it could be in the future.
It's not unhinged at all, it's a lack of imagination on both of your parts.
The only people who use LLMs "as a tool" are those who couldn't do the work without them at all.
> The only people who use LLMs "as a tool" are those who couldn't do the work without them at all.

Do you mean that? It's clearly false, but I don't want to waste time gathering famous-person counterexamples if you already know it's a huge exaggeration at best.

No true Scotsman, right?