You could also say that ChatGPT erred similarly to the original writer, who was unclear and misleading about events.

We needn't act like they share some grand enlightenment. It's just not well expressed. ChatGPT's output is also frequently not well expressed and not well thought out.

There are many more ways to err than to get something right. ChatGPT getting OP right where many people here didn't suggests there's a particular style of writing/thinking that isn't obvious to everyone but that ChatGPT can identify and understand — that's more likely than OP and ChatGPT both accidentally making exactly the same error.
Why would that be more likely? It seems like OP and ChatGPT (which is, in effect, an aggregate of many people of different skill levels) could easily make the same failure to communicate. Many of ChatGPT's failures are failures to communicate or to convey structured thinking.
Because out of all the possible communication failures OP and ChatGPT could each make, the odds that they both make the exact same error — and in a way that makes the two errors cancel out, so the misreading lands back on the intended meaning — are extremely low.