Isn’t it at least equally likely that one would be more prone to confusion if one was a visual thinker?

I don’t think we can infer anything about how LLMs think based on this.

Right. I'm not claiming the LLM has visual imagination - I suspect that OP has it, and that ChatGPT was trained on enough text from visual thinkers implicitly conveying their experience of the world, that it's now able to correctly interpret writing like that of OP's.
It's a strange feeling, watching the AI get better at language comprehension than I am.

I made the same mistake as you on the original comment (I read it as "Ulbricht returned to the cafe, he actually sat down right in front of me while I was reading the story about his previous arrest here, and that's when I realised it was the same place"), and I also thought you were saying that you think ChatGPT has a visual "imagination" inside.

(I don't know if it does or doesn't, but given the "o" in "4o" is supposed to make it multi-modal, my default assumption is that 4o can visualise things… but then, that's also my default assumption about humans, and you being aphantasic shows this is not necessarily so).

As a visual thinker myself, I was also confused by how the story was presented. ChatGPT did better than me.