> their realization created a vivid mental image of the event unfolding in that space, which made the story feel more immersive.

Glad that ChatGPT, probably like GP themselves, is a visualizer and actually can create a "vivid mental image" of something. For those of us with aphantasia, that is not a thing. Myself, I too was mighty confused by the text, which read literally like a time travel story, and was only missing a cat and tomorrow's newspaper.

Legitimately, and I say this with absolutely no shade intended: this is a reading comprehension problem, nothing to do with aphantasia.

He clearly states that he was reading an article, and he uses past-tense verbs when referring to Ross and to the events spelled out in the article. If you somehow thought he could be reading an article, which by definition has to be describing a past event, while seeing that event unfold in real time, that is a logic flaw on your part.

It has nothing to do with what you can or cannot visualize. All you have to do is ask yourself: could he have been reading an article about Ross's arrest while watching it happen? Since nobody can violate the causality of spacetime, the answer is no.

This isn’t just you; it's everybody in this thread who reads this and finds it a little confusing. No, it’s very clearly him speaking about a past experience of reading an article about a past event.

I realised what was going on, but I did a double-take at:

> Then Ulbricht walked into the public library and sat down at the table directly in front of me

The problem is that two past events are being described, so tense alone cannot distinguish them. Cut the readers some slack; the writing could have been better.

Done for effect: it felt to the OP as if it were the present, so the writing conveys that, while elsewhere making it clear the arrest was not happening now.
To follow the tense and delivery of the previous sentence, it would have been clearest to say

"Then when Ulbricht..."

That "then" always does a lot of heavy lifting in English prose.

I am equally baffled by the responses, and I appreciated this explanation: it helped me work on my communication style and expresses a lot of the same frustrations I have. What is actually going on here? This isn’t shade at anyone; I just feel like people are losing some fundamental ability to deduce from context what they are reading. It’s doubly concerning that people immediately reach for an AI/LLM to explain it for them, which cannot possibly be helping the first problem.
Agree. This entire thread is weird. How do so many people in this thread have such obvious reading comprehension issues?

On a similar note--I've noticed that HN comments are often overwrought, like the commenter is trying to sound smarter than they actually are but just ends up muddling what they're trying to say.

Perhaps these things are connected.

If an LLM clears up a misunderstanding, I am having trouble seeing that as a bad thing.

Maybe in 10 years we can blame poor reading comprehension on having a decade of computers reading for us. But it’s a bit early for that.

Who will think if LLM is doing all the thinking?
The problem is that people already have piss-poor reading comprehension. Relying on LLMs to help them is going to make it worse than it already is.
I wonder what is going on. I’ve noticed this getting worse for a long time, to the point that I’m not sure it’s my imagination anymore. I usually like to lambast whole-word reading as a complete failure of the American school system that contributes to this, but I think it’s likely something else. Shorter attention spans?
Long form reading is dying.

We have a multitude of immediate distractions now.

Books build richer worlds & ideas. But without learning to love books very early in life, which takes a lot of uninterrupted time, they don’t come naturally to most.

I used to read a few books a week, virtually every week. Sometimes two or three over a long day and into the night. I still read a lot daily, interesting and useful things in short form. But finding time to read books seems to have become more difficult.

I do think the comment had something about how it was written that made it hard to follow. I understood the first sentence. But then I got to

> Having this tableau unexpectedly unfold right in front of my eyes

And the metaphor / tense shift caught me by surprise and made my eyes retrace to the beginning. I still got it, but there was a little bit of comprehension whiplash as I hit that bump in the road.

In some ways, we're treated to an experience like the author's as we hit that sentence, so in that sense it's clever writing. On the other hand, maybe too clever for a casual web forum instead of, say, a letter.

Agree this is a consequence of people reading too fast and reacting.
{"deleted":true,"id":42887725,"parent":42790715,"time":1738331989,"type":"comment"}
Isn’t it at least equally likely that one would be more prone to confusion if one was a visual thinker?

I don’t think we can infer anything about how LLMs think based on this.

Right. I'm not claiming the LLM has visual imagination - I suspect that OP has it, and that ChatGPT was trained on enough text from visual thinkers implicitly conveying their experience of the world that it's now able to correctly interpret writing like OP's.
It's a strange feeling, watching the AI get better at language comprehension than me.

I made a similar mistake on the original comment as you (I read it as "Ulbricht returned to the cafe, he actually sat down right in front of me while I was reading the story about his previous arrest here, and that's when I realised it was the same place"), and also thought you were saying that you think ChatGPT has a visual "imagination" inside.

(I don't know if it does or doesn't, but given the "o" in "4o" is supposed to make it multi-modal, my default assumption is that 4o can visualise things… but then, that's also my default assumption about humans, and you being aphantasic shows this is not necessarily so).

As a visual thinker myself, I was also confused by how the story was presented. ChatGPT did better than me.
You could also say that ChatGPT erred similarly to the original writer, who was unclear and misleading about events.

We needn't act like they share some grand enlightenment. It's just not well expressed. ChatGPT's output is also frequently not well expressed and not well thought out.

There are many more ways to err than to get something right. ChatGPT getting OP right where many people here didn't tells us it's more likely that there is a particular style of writing/thinking that is not obvious to everyone but that ChatGPT can identify and understand, rather than OP and ChatGPT both accidentally making exactly the same error.
Why would that be more likely? Seems like OP and ChatGPT (which is just many people of different skill levels) might easily make the same failure to communicate. Many failures of ChatGPT are failures to communicate or to convey structured thinking.
Because out of all possible communication failures OP and ChatGPT could make, them both making the exact same error, in a way that makes the two errors cancel out, is extremely unlikely.
One, ChatGPT isn't a "visualizer."

Two, I have aphantasia and didn't picture anything. I got it the first time without any confusion.

Are you seriously asking ChatGPT to read things for you? No wonder your reading comprehension is cooked. Don't blame aphantasia.

Setting the judgment in your comment aside, you have to admit that the commenter's action was a successful comprehension strategy: they learned from it and can use it in the future without ChatGPT.