I think I've amended that thought. They are not necessarily lacking in intelligence. I hypothesize that LLMs pick up on optimism and pessimism, among other sentiments, in the incoming prompt: someone prompting with no hope that the result will be useful ends up with useless garbage output, and vice versa.
Gung-ho AI advocates overlook problems and seem to focus more on the potential they see for the future, giving everything a nice rose tint.
Pessimists will focus on the problems they encounter and likely won't put in as much effort to get the results they want, so they end up with worse results than they might otherwise have achieved, and worse than what the optimist saw.
It's pretty clear that people think greenfield projects can be endlessly slopified and that AI will always be able to dig them out of whatever logical hole it created, so it doesn't matter which abstraction the AI chose this time; it can always be made better later.
This is akin to people who think we can just keep burning oil to fuel technological growth because it'll somehow improve technology's ability to solve climate problems.
It's akin to the techno-capitalist cult of "effective altruism", which assumes there's no way you could f'up the world that you can't fix with "good deeds".
There's a lot of hidden context in evaluating the output of LLMs, and if you're just looking at today's successes, you'll come away with a much different view than if you're looking at next year's.
Optimism, in this case, is just the belief that the AI will keep getting more powerful, so it'll always be able to clean up today's mess.
I call this techno-magic, indistinguishable from religious 'optimism'.