So while wolfgang42 wasn't there when Ulbricht was actually arrested, their realization created a vivid mental image of the event unfolding in that space, which made the story feel more immersive.
In short: they were reading about an old event that happened to have occurred in the very spot where they were sitting at that moment. Hope that clears it up!
Glad that ChatGPT, probably like GP themselves, is a visualizer and actually can create a "vivid mental image" of something. For those of us with aphantasia, that is not a thing. Myself, I too was mighty confused by the text, which read literally like a time travel story, and was only missing a cat and tomorrow's newspaper.
He clearly states that he was reading an article, and he uses past-tense verbs when referring to Ross and to the events spelled out in the article. If you somehow thought he could be reading an article, which by definition describes a past event, while watching that event unfold in real time, that is a logic flaw on your part.
It has nothing to do with what you can or cannot visualize. All you have to do is ask yourself: could he have been reading an article about Ross’s arrest while watching it happen? Since nobody can violate the causality of spacetime, the answer is no.
This isn’t just you; it’s everybody in this thread who is reading this and going "this is a little confusing." No, it’s very clearly him describing a past experience of reading an article about a past event.
> Then Ulbricht walked into the public library and sat down at the table directly in front of me
The problem is that two past events are being described, so tense alone cannot distinguish them. Cut the readers some slack; the writing could have been better.
"Then when Ulbricht..."
That "then" always does a lot of heavy lifting in English prose.
On a similar note--I've noticed that HN comments are often overwrought, as if the commenter is trying to sound smarter than they actually are but just ends up muddling what they're trying to say.
Perhaps these things are connected.
Maybe in 10 years we can blame poor reading comprehension on having a decade of computers reading for us. But it’s a bit early for that.
We have a multitude of immediate distractions now.
Books build richer worlds & ideas. But without learning to love books very early in life, which takes a lot of uninterrupted time, they don’t come naturally to most.
I used to read a few books a week, virtually every week. Sometimes two or three across a long day and part of a night. I still read a lot daily, interesting and useful things in short form. But finding time to read books seems to have become more difficult.
> Having this tableau unexpectedly unfold right in front of my eyes
And the metaphor / tense shift caught me by surprise and made my eyes retrace to the beginning. I still got it, but there was a little bit of comprehension whiplash as I hit that bump in the road.
In some ways, we're treated to an experience like the author's as we hit that sentence, so in that sense it's clever writing. On the other hand, maybe too clever for a casual web forum instead of, say, a letter.
I don’t think we can infer anything about how LLMs think based on this.
I made a similar mistake to yours on the original comment (I read it as "Ulbricht returned to the cafe, he actually sat down right in front of me while I was reading the story about his previous arrest here, and that's when I realised it was the same place"), and also thought you were saying that you think ChatGPT has a visual "imagination" inside.
(I don't know if it does or doesn't, but given the "o" in "4o" is supposed to make it multi-modal, my default assumption is that 4o can visualise things… but then, that's also my default assumption about humans, and you being aphantasic shows this is not necessarily so).
We needn't act like they share some grand enlightenment. It's just not well expressed. ChatGPT's output is also frequently not well expressed and not well thought out.
Two, I have aphantasia and didn't picture anything. I got it the first time without any confusion.
Are you seriously asking ChatGPT to read things for you? No wonder your reading comprehension is cooked. Don't blame aphantasia.
Generative AI has all but solved the Frame Problem.
Those expressions were intractable because it was impossible to represent in logic all the background knowledge required to understand the context.
It turns out it is possible to represent all that knowledge in compressed form, via statistical summarisation applied to humongous amounts of data with processing power unimaginable back then; this puts the knowledge within reach of the algorithm processing the sentence, which is thus capable of understanding the context.
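To make that concrete, here's a minimal sketch of my own (nothing from the comment above; it assumes the Hugging Face transformers package and the small GPT-2 model, both just illustrative choices): a statistically trained language model assigns higher probability to the continuation consistent with background knowledge nobody ever wrote down as axioms.

    # Toy illustration: background knowledge that logic-based systems had to
    # hand-encode lives implicitly in a statistically trained model's weights.
    # Assumes: pip install torch transformers
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def avg_logprob(text):
        # Mean per-token log-probability the model assigns to `text`;
        # passing labels=ids makes .loss the mean negative log-likelihood.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            return -model(ids, labels=ids).loss.item()

    # Nothing here states what arrests or libraries are, yet the model
    # prefers the sentence consistent with compressed world knowledge:
    print(avg_logprob("The agents arrested him quietly in the public library."))
    print(avg_logprob("The agents arrested him quietly in the public volcano."))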
The problem turned out to be that some people got so fixated on formal logic they apparently couldn't spot that their own mind does not do any kind of symbolic reasoning unless forced to by lots of training and willpower.
The brain has infinite potentials, but only finite resolves. So you can only play a finite number of moves in a game of infinite infinities.
Individual minds have varying mental technology; our mental technologies change and adapt to challenges (not always in real time). Thus these infinite configurations create new potentials that previously didn’t exist in the realm of potential without some serious mental vectoring.
Get it? You were just so sure of yourself you canceled your own infinite potentials!
Remember, it’s only finite after it happens. Until then it’s potential.
No, it doesn't. The brain has a finite number of possible states to be in. It's an absurdly large number of states, but it is finite. And out of that absurdly large but finite number of possible states, only a tiny fraction correspond to states potentially reachable by a functioning brain. The rest of them are noise.
Not to mention, it's highly unlikely anything at that low a level matters to the functioning of a brain - at a functional level, physical states have to be quantized hard to ensure reliability and resistance against environmental noise.
Potential is resolving into state in the moment of now!
Be grateful, not scornful, that it all collapses into state (don’t we all like consistency?); that is not, however, what it “is”. It “is” potential continuously resolving. The masterwork that is the mind is a hyperdimensional and extradimensional supercomputer (that gets us by yet goes mostly squandered). Our minds and peripherals can manipulate, break down, and remake existential reality in the likeness of our own images. You seem to complain that your own image is soiled by your other inputs or predispositions.
Sure, it’s a lot of work, yet that’s what this whole universe thing runs on. Potential. State is what it collapses into in the moment of “now”.
And you’re right, continuity is an illusion. Oops.
The rules for translation are themselves the result of intelligence; when the thought experiment is made real (I've seen an example on TV once), these rules are written down by humans, using human intelligence.
A machine which itself generates these rules from observation has at least the intelligence* that humans applied specifically in the creation of documents expressing the same rules.
That a human can mechanically follow those same rules without understanding them says as much and as little as the fact that the DNA sequences within the neurones in our brains are not themselves directly conscious of higher-level concepts such as "why is it so hard to type 'why' rather than 'wju' today?", despite being the foundation of the intelligent process of natural selection and evolution.
* well, the capability — I'm open to the argument that AI are thick due to the need for so many more examples than humans need, and are simply making up for it by being very very fast and squeezing the equivalent of several million years of experiences for a human into a month of wall-clock time.
Minds shuffle information. Including about themselves.
Paper with information being shuffled by rules exhibiting intelligence and awareness of “self” is just ridiculously inefficient. Not inherently less capable.