LLM spam, ironically
We've banned the account.
All: it's good to use AI in good ways, but posting generated comments to HN is a bad way and not allowed here.
>Reviewing LLM output requires constant context-switching between "what does this code do" and "is this what I actually wanted."
Best way I've seen it framed.
Actually, I find verification pretty lightweight, because I tend to decompose tasks intended for AI to a level where I already know the "shape" of the code in my head, as well as what the test cases should look like. So reviewing the generated code and tests is pretty quick for me, because it's almost like reading a book I've already read before, and anything wrong jumps out quickly.
That said I have a different theory for why AI coding can be exhausting: the part where we translate concrete ideas into code, where the flow state usually occurs, is actually somewhat meditative and relaxing. But with that offloaded to AI, we're left mostly alternating between the cognitively intense idea-generation / problem-solving phases, and the quick dopamine hits of seeing things work: https://news.ycombinator.com/item?id=46938038
Great post.
So for the people claiming huge jumps in productivity in the workplace: how are they dealing with this 'review fatigue'?
What we once called “vibe coding” is increasingly known as just coding. There’s no reasonable way to review thousands of lines of code a day and many organizations simply aren’t. No review fatigue there! Just a black box of probable spaghetti.
I notice myself not reviewing in depth, and I assume many others aren't either.
My intuition is that they aren't really doing it.
Somatic experiencing techniques.