I found it useful to preface with

* this section written by me typing on keyboard *

* this section produced by AI *

And usually both exist in documents and lengthy communications. This gets my point across exactly as I intended, and then I can attach an AI appendix ten times as long that serves as helpful indexing and references.

> attach an AI appendix ten times as long that serves as helpful indexing and references.

Are references helpful when they're generated? The reader could have generated them themselves. References would be helpful if they were personal references to material you actually read and curated; the value then would be getting your taste. References from an AI may well be good-looking nonsense.

"The user could have written the code themselves"

Yes, sometimes this is true, but not always.

Note, it's not one prompt (there isn't really "one prompt" any more; prompt engineering is such a 2023-2024 thing), nor is it purely unreviewed output. It's curated output that was created by AI but iterated on with me, since it has to match my intention. And most of the time I don't prompt the agent directly any more: I go through a layer of agent management that injects more context into the agent that actually works on the task.
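
A rough sketch of that pattern, if it helps (ManagerAgent, WorkerAgent, and call_llm here are placeholder names for illustration, not my actual setup):

    # Sketch of the "manager injects context, worker executes" pattern.
    # ManagerAgent, WorkerAgent, and call_llm are placeholders, not a real API.
    from dataclasses import dataclass, field

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model call; swap in your actual API client.
        return f"[model response to {len(prompt)} chars of prompt]"

    @dataclass
    class WorkerAgent:
        system_prompt: str = "You are a coding agent. Follow the task spec exactly."

        def run(self, task: str, context: str) -> str:
            # The worker only sees the task plus whatever context
            # the manager chose to inject.
            return call_llm(f"{self.system_prompt}\n\nContext:\n{context}\n\nTask:\n{task}")

    @dataclass
    class ManagerAgent:
        # Standing intent: style rules, prior decisions, constraints.
        project_notes: list[str] = field(default_factory=list)

        def delegate(self, task: str, worker: WorkerAgent) -> str:
            context = "\n".join(self.project_notes) or "(no additional context)"
            draft = worker.run(task, context)
            # The human reviews and iterates on the draft afterwards,
            # rather than shipping unreviewed output.
            return draft

    manager = ManagerAgent(project_notes=["Prefer small diffs.", "Match existing style."])
    print(manager.delegate("Refactor the parser.", WorkerAgent()))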

I agree wholeheartedly. I don't see any balance between the effort someone dedicates to generating text and the effort I spend consuming it. If you feel there's further insight to be gained from an LLM, give me the prompt, not the output. Any communication channel reflects a balance in the information flowing through it, and we are still adjusting to the proper etiquette.