Using an LLM is perfect for writing documentation, which is something I've always had problems with.
As someone who has dealt with projects with AI-generated documentation... I can't really say I agree. Good documentation is terse, efficiently communicating the essential details. AI output is soooooooo damn verbose. What should've been a paragraph becomes a giant markdown file. I like reading human-written documentation, but AI-slop documentation is so tedious I just bounce right off.

Plus, when a person wrote the documentation, I can ask the author about details, and they'll probably know the answer, since they had enough domain expertise and knowledge of the code to explain anything that might be missing. I can't trust you to know anything about code you had an AI generate and then had an AI write documentation for.

Then there's the accuracy issue. Any documentation can be inaccurate, and it can obviously get outdated with time, but at least with human-authored documentation, I can be confident the content at some point matched a person's best understanding of the topic. With AI, no understanding is involved; it's just probabilistically generated text. We've all hopefully seen LLMs generate plausible-sounding but completely wrong text often enough to doubt their output.

This immediately invalidates a software or technical project for me. The value of documentation isn't the output alone, but the act of documenting by a person or people who understand the project.

I have done a lot of technical writing in my career, and documenting things is exactly where you run into the worst design problems before they go live.