Hacker News
Wow, that's such a drastically different experience from mine. May I ask which toolset you're using? Are you limited to your homegrown "AcmeCode", or do you have full access to Claude Code / Cursor with the latest and greatest models, 1M context size, and full repo access?

I see it generating between 50% and 90% accuracy on both small and large tasks: the PRs it generates range from 50% usable code that a human can tweak, to a 90% solution (with the occasional 100% "wow, it actually did it, no comments, let's merge").

I've also found it to be a skill: some engineers find it easier to articulate what they want, while others find it easier to think while writing code.

I used to think that the people who keep saying (in March 2026) that AI does not generate good code are just not smart and write bad prompts.

I think I've amended that thought. They are not necessarily lacking in intelligence. I hypothesize that LLMs pick up on optimism and pessimism, among other sentiments, in the incoming prompt: someone prompting with no hope that the result will be useful ends up with useless garbage output, and vice versa.

Don't know why you're getting downvoted; this is a fascinating hypothesis and honestly quite believable. It makes far more sense than the intuitive belief that there's actually something under the human skin suit understanding any of this code.