The GitHub issue is AI-generated. In my experience triaging these in other projects, you can’t really trust anything in them without verifying. Users will make claims and then the AI will embellish to make them sound more important and accurate.
> AI will embellish to make them sound more important and accurate.

Did you mean “than accurate” rather than “and accurate”? Having a more accurate issue description only sounds like a good thing to me.

Making them look more accurate is not the same as being more accurate, and LLMs are pretty good at the former.

Imagine a user has a vague idea of something that is broken; the LLM will then interpret their comment as whatever it thinks the most likely underlying problem is, without actually checking anything.

“Sound more important and accurate” is correct. It doesn’t imply actual accuracy; the LLM will just use figures that resemble an actual calculation, hiding that they are wild guesses.

I’ve run into this issue trying to use Claude to instrument and analyze some code for performance. It would make claims like “around 500 MB of RAM are being used in this allocation” without evidence.
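One way to check a claim like that yourself, instead of taking the model’s figure at face value: a minimal Python sketch using the standard library’s tracemalloc to measure the allocation directly. (build_cache here is a hypothetical stand-in for whatever allocation the model was estimating.)

    import tracemalloc

    def build_cache():
        # Hypothetical allocation the model claimed used "around 500 MB".
        # This one is actually ~100 MB of 1 KiB blocks, so the guess
        # would be off by roughly 5x.
        return [bytes(1024) for _ in range(100_000)]

    tracemalloc.start()
    cache = build_cache()
    current, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
    tracemalloc.stop()

    print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")

A number like that comes from the allocator itself, so when the model’s estimate disagrees with it, the estimate loses.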

I read that as "make them sound more important and accurate than they actually are".
To make them sound more accurate.