Hacker News
Right. I'm not claiming the LLM has visual imagination - I suspect that OP has it, and that ChatGPT was trained on enough text from visual thinkers implicitly conveying their experience of the world that it's now able to correctly interpret writing like OP's.
It's a strange feeling, watching the AI get better at language comprehension than me.

I misread the original comment the same way you did (I read it as "Ulbricht returned to the cafe, he actually sat down right in front of me while I was reading the story about his previous arrest here, and that's when I realised it was the same place"), and I also thought you were saying that ChatGPT has a visual "imagination" inside.

(I don't know whether it does or not, but since the "o" in "4o" is supposed to make it multi-modal, my default assumption is that 4o can visualise things… but then, that's also my default assumption about humans, and you being aphantasic shows this is not necessarily so.)

As a visual thinker myself, I was also confused by how the story was presented. ChatGPT did better than me.
You could also say that ChatGPT erred similarly to the original writer, who was unclear and misleading about events.

We needn't act like they share some grand enlightenment. It's just not well expressed. ChatGPT's output is also frequently not well expressed and not well thought out.

There are many more ways to err than to get something right. ChatGPT getting OP right where many people here didn't suggests that there is a particular style of writing/thinking that isn't obvious to everyone but that ChatGPT can identify and understand; that's more likely than OP and ChatGPT accidentally making exactly the same error.
Why would that be more likely? It seems like OP and ChatGPT (which is, in effect, an aggregate of many people of different skill levels) might easily make the same failure to communicate. Many of ChatGPT's failures are failures to communicate or to convey structured thinking.
Because out of all the possible communication failures OP and ChatGPT could make, both of them making the exact same error, in a way that makes the two errors cancel out, is extremely unlikely.