Hacker News
I used to think that the people who keep saying (in March 2026) that AI does not generate good code are just not smart and ask stupid prompts.

I think I've amended that thought. They are not necessarily lacking in intelligence. I hypothesize that LLMs pick up on optimism, pessimism, and other sentiments in the incoming prompt: someone who prompts with no hope that the result will be useful ends up with useless garbage output, and vice versa.

Exactly. You have to manifest at a high vibrational frequency.
Thanks for the laugh.
That sounds a lot more like confirmation bias than any real effect on the AI's output.

Gung-ho AI advocates overlook problems and seem to focus more on the potential they see for the future, giving everything a nice rose tint.

Pessimists will focus on the problems they encounter and likely won't put in as much effort to get the results they want, so they see worse results than they might otherwise have achieved, and worse than what the optimist saw.

That's a valid-sounding argument. However, many people with no strong view either way are producing functional, good code with AI daily, and the original context of this thread is someone who has never been able to produce anything committable. Many, many real-world experiences show something excellent and ready to go from a simple one-shot prompt.
This is kinda like that thing about how psychic mediums supposedly can't medium if there's a skeptic in the room. Goes to show that AI really is a modern-day ouija board.
Don’t know why you’re getting downvoted, this is a fascinating hypothesis and honestly super believable. It makes way more sense than the intuitive belief that there’s actually something under the human skin suit understanding any of this code.
It's probably more to do with the intelligence required to know when a specific type of code will yield poor future integrations and large-scale implementations.

It's pretty clear that people think greenfield projects can constantly be slopified and that AI will always be able to dig them another logical connection, so it doesn't matter which abstraction the AI chose this time; it can always be better.

This is akin to people who think we can just keep using oil to fuel technological growth because it'll somehow improve the ability of technology to solve climate problems.

It's akin to the techno-capitalist cult of "effective altruism," which assumes there's no way you could f' up the world that you can't fix with "good deeds."

There's a lot of hidden context in evaluating the output of LLMs, and if you're just looking at today's successes, you'll come away with a much different view than if you're looking at next year's.

Optimism, in this case, is just the belief that AI will keep getting more powerful, so it'll always clean up today's mess.

I call this techno-magic, indistinguishable from religious "optimism."