The LLM presents a perverse incentive here - It is used for perceived efficiency gains, most of which would be consumed by the act of rewriting and redrafting. The alienness of the thoughts in the document is also not conducive to this; reading a long document about something you think you know but did not write is exhausting and mentally painful - this is why code review produces such relatively poor results.

Quite frankly, while having a human rewrite an LLM draft would be okay, I do not believe it is reasonable to expect that to ever happen. It will either be like high school paper plagiarism (just change around some of the sentences and rephrase it, bro), or it will not even get that much. Given what we know about human psychology, it is unreasonable to expect that "human rewrites of LLM drafts", done at a level where the human actually contributes something, are maintainable or scalable; most people psychologically can't put in that effort.

>The LLM presents a perverse incentive here - It is used for perceived efficiency gains, most of which would be consumed by the act of rewriting and redrafting.

It might give efficiency gains for the writer, but the reader then has to read the slop, guess at what it was intended to communicate, and weed out "hallucinations". That's a big loss of efficiency for the reader.