I don't know what to think about comments like this. So many of them come from accounts that are days or at most weeks old. I don't know if this is astroturfing, or you really are just a new account and this is your experience.

As somebody who has been coding for just shy of 40 years and has gone through the actual pain of learning to run a high-level and productive dev team, your experience does not match mine. Even great devs will forget some of the basics and make mistakes, and I wish every junior (hell, even seniors) were as effective as the LLMs are turning out to be. Put the LLM in the hands of a seasoned engineer who also has the skills to manage projects and mentor junior devs and you have a powerful accelerator. I'm seeing the outcome of that every day on my team. The velocity is up AND the quality is up.

> The velocity is up AND the quality is up.

This is not my experience on a team of experienced SWEs working on a product worth $100M/year.

Agents are a great search engine for a codebase and really nice for debugging, but any time we have them write feature code they make too many mistakes. We end up spending more time tuning the process than it takes to just write the code, AND you are trading human context for agent context that gets wiped.

I can't speak to your experience. I can only speak to mine.

We've spent years reducing old debt and modernizing our application and processes. The places where we've made that investment are where we are currently seeing the additional acceleration. The places where we haven't are still stuck in the mud, but per your "search engine for a codebase" comment our engineers are starting to engage with systems they would not have previously touched.

There are areas for sure where LLMs would fall down. That's where we need the experts to guide them and restructure the project so that it is LLM friendly (which also just happens to be the same things that make the app better for human engineers).

And I'm serious about the quality comment. Maybe there's a difference in how your team is using the tools, but I have individuals on my team who are learning to leverage the tools to create better outputs, not just pump out features faster.

I'm not saying LLMs solve everything, FAR from it. But it's like handing a master weapon to an experienced warrior.

I also agree. In fact, I was hitting a limit on my ability to ship a really difficult feature, and after I became good at using Claude, I was finally able to get it done. The last mile was really hard, but I had documented things very well, so the LLM was able to fly through the bugs and write tests that I dare say are too difficult for humans to design, since they require keeping a large amount of context in your head (distributed computing is really hard), which is where I was hitting my limit. I now think I can only do the easy stuff by hand; anything serious requires me to at least have an LLM verify it, though in practice I just let it do the work while I explain the high-level vision and the sorts of tests I expect it to have.
Your experience matches mine too. Experienced devs are increasing their output while maintaining quality. I'm personally writing better-quality code than before because it's trivial to tell AI to refactor or rename something. I care about good code, but I'm also lazy, so I have my Claude skills set up to have AI do it for me. (Of course, I always keep the human in the loop and review the outputs.)

You said that you're restructuring the project to be LLM friendly, which also makes the app better for humans. I 100% agree with this. Code that is unreadable and unmaintainable for humans is much more difficult for AI to understand. I think companies that practiced or prioritized code hygiene will be ahead of the game when it comes to getting good results with agentic AI.

Whenever actual studies of LLM coding are conducted, they always show that it is a net loss in quality and delivery speed.

(They are good as coder psychotherapy tho.)

Well, things are changing so fast that those studies are going to be out of date. And I have no doubt some people are experiencing a net loss while others are not. We need to pry apart why some people are having success with it and others aren't, and build on top of what's working.
Who would I possibly be astroturfing for? The entire industry is all-in on LLMs.
I can't speak for you specifically; it's just a trend I'm seeing, and unfortunately your 2-day-old account falls into that bucket. There are a lot of people who have a lot to lose or who are very afraid of what LLMs will do. There's plenty of incentive to do this.

I would be curious to see if I'm just imagining this or it really is a trend.

At the same time you have astro-turfing from LLM producers though, so...
Agreed, but I find that astro-turfing far more obvious.
Agreed.

It's clear to me as a more seasoned engineer that I can prompt the LLM to do what I want (more or less) and it will catch generally small errors in my approach before I spend time trying them. I don't often feel like I ended up in a different place than I would have on my own. I just ended up there faster, making fewer concessions along the way.

I do worry I'll become lazy and spoiled. And then lose access to the LLM and feel crippled. That's concerning. I also worry that others aren't reading the patches the AI generates like I am before opening PRs, which is also concerning.

This is a very reasonable comment. IMO it's a fallacy to take the age of an account into consideration, especially when it's sharing subjective experience.