Sorry, it went over my head a bit. You read about his arrest while he was being arrested?
He was being arrested in the article, not IRL. When I say “Ulbricht walked into the public library and sat down at the table directly in front of me” I mean that I read

> He went... past the periodicals and reference desk, beyond the romance novels, and settled in at a circular table near science fiction, on the second floor... in a corner, with a view out the window and his back toward the wall.

and realized that I was in the Glen Park public library, at a circular table near science fiction on the second floor, in a corner with my back to the window, and facing directly towards where the article had just said he had sat.

I see, so you accidentally retraced his footsteps from years prior and then realized it as you were reading about it.
> He was being arrested in the article, not IRL.

So the article lied that he was arrested?

Then he realized that he was Ross Ulbricht all along.
That’s because they are describing the inner workings of their visualization systems.

They saw him walk in because they were where it happened. The image of Ross, and of the others, was in their mind, however.

I had the same confusion initially; interestingly, ChatGPT gets it:

So while wolfgang42 wasn't there when Ulbricht was actually arrested, their realization created a vivid mental image of the event unfolding in that space, which made the story feel more immersive.

In short: they were reading about an old event, but it happened to occur in the same spot they were sitting at that moment. Hope that clears it up!

> their realization created a vivid mental image of the event unfolding in that space, which made the story feel more immersive.

Glad that ChatGPT, probably like GP themselves, is a visualizer and actually can create a "vivid mental image" of something. For those of us with aphantasia, that is not a thing. Myself, I too was mighty confused by the text, which read literally like a time travel story, and was only missing a cat and tomorrow's newspaper.

Legitimately, and I say this with absolutely no shade intended: this is a reading comprehension problem, nothing to do with aphantasia.

He clearly states that he was reading an article; he uses past-tense verbs when referring to Ross and to the events spelled out in the article. If you somehow thought that he could be reading an article, which necessarily describes a past event, while seeing that event unfold in real time, that is a logic flaw on your part.

It has nothing to do with what you can or cannot visualize. All you have to do is ask yourself: could he have been reading an article about Ross’s arrest while watching it? Since nobody can violate the causality of spacetime, the answer is no.

This isn’t just you; it’s everybody in this thread who is reading this and going “this is a little confusing.” No, it’s very clearly him speaking about a past experience of reading an article about a past event.

I realised what was going on, but I did a double-take at:

> Then Ulbricht walked into the public library and sat down at the table directly in front of me

The problem is that two past events are being described, so tense alone cannot distinguish them. Cut the readers some slack; the writing could have been better.

Done for effect: it felt to the OP as if it were the present, so the writing conveys that, while elsewhere making it clear the arrest was not the present.
To follow the tense and delivery of the previous sentence, it would have been clearest to say

"Then when Ulbricht..."

That "then" always does a lot of heavy lifting in English prose.

I am as baffled at the responses as anyone, and I appreciated this explanation; it was helpful for working on my communication style and it expresses a lot of frustrations I share. What is actually going on here? This isn’t shade at anyone; I just feel like people are losing some fundamental ability to deduce from context what they are reading. It’s doubly concerning because people immediately reach for an AI/LLM to explain it for them, which cannot possibly be helping the first problem.
Agree. This entire thread is weird. How do so many people in this thread have such obvious reading comprehension issues?

On a similar note: I've noticed that HN comments are often overwrought, like the commenter is trying to sound smarter than they actually are but just ends up muddling what they're trying to say.

Perhaps these things are connected.

If an LLM clears up a misunderstanding, I am having trouble seeing that as a bad thing.

Maybe in 10 years we can blame poor reading comprehension on having a decade of computers reading for us. But it’s a bit early for that.

Who will think if the LLM is doing all the thinking?
The problem is that people already have piss-poor reading comprehension. Relying on LLMs to help them is going to make it worse than it already is.
I wonder what is going on? I’ve noticed this getting worse for a long time, to the point where I’m not sure it’s my imagination anymore. I usually like to lambast whole-word reading as a complete failure of the American school system that contributes to this, but I think it’s likely something else. Shorter attention spans?
Long form reading is dying.

We have a multitude of immediate distractions now.

Books build richer worlds & ideas. But without learning to love books very early in life, which takes a lot of uninterrupted time, they don’t come naturally to most.

I used to read a few books a week, virtually every week. Sometimes two or three over a long day and some of the night. I still read a lot daily: interesting and useful things in short form. But finding time to read books seems to have become more difficult.

I do think there was something about how the comment was written that made it hard to follow. I understood the first sentence. But then I got to

> Having this tableau unexpectedly unfold right in front of my eyes

And the metaphor / tense shift caught me by surprise and made my eyes retrace to the beginning. I still got it, but there was a little bit of comprehension whiplash as I hit that bump in the road.

In some ways, we're treated to an experience like the author's as we hit that sentence, so in that sense it's clever writing. On the other hand, maybe too clever for a casual web forum instead of, say, a letter.

Agreed, this is a consequence of people reading too fast and reacting.
{"deleted":true,"id":42887725,"parent":42790715,"time":1738331989,"type":"comment"}
Isn’t it at least equally likely that one would be more prone to confusion if one were a visual thinker?

I don’t think we can infer anything about how LLMs think based on this.

Right. I'm not claiming the LLM has visual imagination - I suspect that OP has it, and that ChatGPT was trained on enough text from visual thinkers implicitly conveying their experience of the world, that it's now able to correctly interpret writing like that of OP's.
It's a strange feeling, watching the AI get better at language comprehension than me.

I made a similar mistake with the original comment as you did (I read it as "Ulbricht returned to the cafe, he actually sat down right in front of me while I was reading the story about his previous arrest here, and that's when I realised it was the same place"), and I also thought you were saying that you think ChatGPT has a visual "imagination" inside.

(I don't know if it does or doesn't, but given the "o" in "4o" is supposed to make it multi-modal, my default assumption is that 4o can visualise things… but then, that's also my default assumption about humans, and you being aphantasic shows this is not necessarily so).

As a visual thinker myself, I was also confused by how the story was presented. ChatGPT did better than me.
You could also say that ChatGPT erred similarly to the original writer, who was unclear and misleading about events.

We needn't act like they share some grand enlightenment. It's just not well expressed. ChatGPT's output is also frequently not well expressed and not well thought out.

There are many more ways to err than to get something right. ChatGPT getting OP right where many people here didn't tells us it's more likely that there is a particular style of writing/thinking that is not obvious to everyone but that ChatGPT can identify and understand, rather than both OP and ChatGPT accidentally making exactly the same error.
One, ChatGPT isn't a "visualizer."

Two, I have aphantasia and didn't picture anything. I got it the first time without any confusion.

Are you seriously asking ChatGPT to read things for you? No wonder your reading comprehension is cooked. Don't blame aphantasia.

Setting any judgment in your comment aside, you have to admit that the commenter's action was a successful comprehension strategy that they learned from and can use in the future without ChatGPT.
Okay, that's actually pretty wild. I totally misunderstood too, but the response from the "AI" does indeed "clear it up" for me. A bit surprised actually, but then again, I suppose I shouldn't be, since language is what those "large language models" are all about after all... :)
Indeed. But there is something surprising here. For decades, people like Chomsky would present examples like this as intractable by any algorithm, and as proof that language is a uniquely human thing. They went as far as to claim that humans have a special language organ, somewhere in the brain perhaps. It turns out a formula exists; it is just very, very large.
> people like Chomsky would present examples like this as intractable by any algorithm, and as proof that language is a uniquely human thing

Generative AI has all but solved the Frame Problem.

Those expressions were intractable because of the impossibility of representing, in logic, all the background knowledge required to understand the context.

It turns out it is possible to represent all that knowledge in compressed form, via statistical summarisation applied to humongous amounts of data and processing power unimaginable back then; this puts the knowledge within reach of the algorithm processing the sentence, which is thus capable of understanding the context.

Which should be expected: since the human brain is finite, it follows that either it is possible to do this, or the brain is some magic piece of divine substrate to which the laws of physics do not apply.

The problem turned out to be that some people got so fixated on formal logic they apparently couldn't spot that their own mind does not do any kind of symbolic reasoning unless forced to by lots of training and willpower.

That’s not what it means at all. You threw a monkey in your own wrench.

The brain has infinite potentials, however only finite resolves. So you can only play a finite number of moves in a game of infinite infinities.

Individual minds have varying mental technology; our mental technologies change and adapt to challenges (not always in real time). Thus these infinite configurations create new potentials that previously didn’t exist in the realm of potential without some serious mental vectoring.

Get it? You were just so sure of yourself you canceled your own infinite potentials!

Remember, it’s only finite after it happens. Until then it’s potential.

> The brain has infinite potentials

No, it doesn't. The brain has a finite number of possible states to be in. It's an absurdly large amount of states, but it is finite. And, out of those absurd but finite number of possible states, only a tiny fraction correspond to possible states potentially reachable by a functioning brain. The rest of them are noise.

You are wrong! Confidently wrong, at that. It's a distribution of potential, not a number of available states. Brain capacity and capability are scalar and can retune themselves at the most fundamental levels.
As far as we know, the universe is discrete at the very bottom and continuity is illusory, so that's still finite.

Not to mention, it's highly unlikely that anything at that low a level matters to the functioning of a brain: at a functional level, physical states have to be quantized hard to ensure reliability and resistance against environmental noise.

You’ve tricked yourself into a narrative.

Potential is resolving into state in the moment of now!

Be grateful, not scornful, that it all collapses into state (don’t we all like consistency?); that is not, however, what it “is”. It “is” potential continuously resolving. The masterwork that is the mind is a hyperdimensional and extradimensional supercomputer (that gets us by yet goes mostly squandered). Our minds and peripherals can manipulate, break down, and remake existential reality in the likeness of our own images. You seem to complain that your own image is soiled by your other inputs or predispositions.

Sure, it’s a lot of work yet that’s what this whole universe thing runs on. Potential. State is what it collapses into in the moment of “now”.

And you’re right, continuity is an illusion. Oops.

Huge amounts of data and processing power are arguably the foundation for the "Chinese room" thought experiment.
I never bought into Searle's argument with the Chinese room.

The rules for translation are themselves the result of intelligence; when the thought experiment is made real (I've seen an example on TV once), these rules are written down by humans, using human intelligence.

A machine which itself generates these rules from observation has at least the intelligence* that humans applied specifically in the creation of documents expressing the same rules.

That a human can mechanically follow those same rules without understanding them, says as much and as little as the fact that the DNA sequences within the neurones in our brains are not themselves directly conscious of higher level concepts such as "why is it so hard to type 'why' rather than 'wju' today?" despite being the foundation of the intelligence process of natural selection and evolution.

* well, the capability — I'm open to the argument that AI are thick due to the need for so many more examples than humans need, and are simply making up for it by being very very fast and squeezing the equivalent of several million years of experiences for a human into a month of wall-clock time.

I didn’t buy that argument at all either.

Minds shuffle information. Including about themselves.

Paper with information being shuffled by rules exhibiting intelligence and awareness of “self” is just ridiculously inefficient. Not inherently less capable.

I don’t think I understand this entirely. The point of the thought experiment is to assume the possibility of the room and consider the consequences. How it might be achievable in practice doesn’t alter this.
The room is possible because there's someone inside with a big list of rules for which Chinese characters to reply with. This represents the huge amount of data processing and statistical power. When the thought experiment was created, you could argue that the room was impossible, so the experiment was meaningless. But that's no longer the case.
If you go and s/Chinese Room/LLM/ against any of the counterarguments to the thought experiment, how many of them does it invalidate?
I'm not sure I'm following you. My comment re the Chinese room was that the parent said the data processing we now have was unimaginable back in the day. In fact, it was imaginable: the Chinese room imagined it.
I was responding to the point that the thought experiment was meaningless.
{"deleted":true,"id":42791099,"parent":42789037,"time":1737540251,"type":"comment"}
Yeah, whoosh for me.