Hacker News
Okay, that's actually pretty wild. I totally misunderstood too, but the response from the "AI" does indeed "clear it up" for me. A bit surprised actually, but then again, I suppose I shouldn't be, since language is what those "large language models" are all about after all... :)
Indeed. There is something surprising here, however. People like Chomsky presented examples like this for decades as intractable for any algorithm, and as proof that language is a uniquely human thing. They went as far as to claim that humans have a special language organ, somewhere in the brain perhaps. It turns out a formula exists; it is just very, very large.
> Chomsky presented examples like this for decades as intractable for any algorithm, and as proof that language is a uniquely human thing

Generative AI has all but solved the Frame Problem.

Those expressions were intractable because of the impossibility of representing in logic all the background knowledge required to understand the context.

It turns out it is possible to represent all that knowledge in compressed form, with statistical summarization applied to amounts of data and processing power unimaginable back then. This puts the knowledge within reach of the algorithm processing the sentence, which is thus capable of understanding the context.
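As a toy illustration of the idea (nothing like a real LLM, and the corpus here is made up): even a bigram model "compresses" its training text into co-occurrence counts, and those counts are enough for an algorithm to pick contextually plausible continuations.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for "humongous amounts of data".
corpus = "the dog bit the man . the man bit the dog".split()

# Compress the corpus into statistics: for each word, count which
# words were observed to follow it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequently observed continuation of `word`."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("bit"))  # "the" always follows "bit" in the corpus
```

Scale the counts up by many orders of magnitude and replace the table with a learned, lossy parameterization, and you get a rough caricature of the "compressed knowledge" the parent describes.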

Which should be expected: since the human brain is finite, it follows that either this is possible, or the brain is some magic piece of divine substrate to which the laws of physics do not apply.

The problem turned out to be that some people got so fixated on formal logic that they apparently couldn't spot that their own minds do not do any kind of symbolic reasoning unless forced to by lots of training and willpower.

That’s not what it means at all. You threw a monkey in your own wrench.

The brain has infinite potentials, however only finite resolves. So you can only play a finite number of moves in a game of infinite infinities.

Individual minds have varying mental technology, and our mental technologies change and adapt to challenges (not always in real time). Thus these infinite configurations create new potentials that previously didn't exist in the realm of potential without some serious mental vectoring.

Get it? You were just so sure of yourself you canceled your own infinite potentials!

Remember, it’s only finite after it happens. Until then it’s potential.

> The brain has infinite potentials

No, it doesn't. The brain has a finite number of possible states to be in. It's an absurdly large amount of states, but it is finite. And, out of those absurd but finite number of possible states, only a tiny fraction correspond to possible states potentially reachable by a functioning brain. The rest of them are noise.

You are wrong! Confidently wrong at that. Distribution of potential, not number of available states. Brain capacity and capability is scalar and can retune itself at the most fundamental levels.

As far as we know, the universe is discrete at the very bottom and continuity is illusory, so that's still finite.

Not to mention, it's highly unlikely anything at that low a level matters to the functioning of a brain - at a functional level, physical states have to be quantized hard to ensure reliability and resistance against environmental noise.

You’ve tricked yourself into a narrative.

Potential is resolving into state in the moment of now!

Be grateful, not scornful, that it all collapses into state (don't we all like consistency?); that is not, however, what it "is". It "is" potential continuously resolving. The masterwork that is the mind is a hyperdimensional and extradimensional supercomputer (that gets us by yet goes mostly squandered). Our minds and peripherals can manipulate, break down, and remake existential reality in the likeness of our own images. You seem to complain your own image is soiled by your other inputs or predispositions.

Sure, it’s a lot of work yet that’s what this whole universe thing runs on. Potential. State is what it collapses into in the moment of “now”.

And you’re right, continuity is an illusion. Oops.

Huge amounts of data and processing power are arguably the foundation for the "Chinese room" thought experiment.

I never bought into Searle's argument with the Chinese room.

The rules for translation are themselves the result of intelligence; when the thought experiment is made real (I've seen an example on TV once), these rules are written down by humans, using human intelligence.

A machine which itself generates these rules from observation has at least as much intelligence* as humans applied specifically in creating the documents expressing the same rules.

That a human can mechanically follow those same rules without understanding them, says as much and as little as the fact that the DNA sequences within the neurones in our brains are not themselves directly conscious of higher level concepts such as "why is it so hard to type 'why' rather than 'wju' today?" despite being the foundation of the intelligence process of natural selection and evolution.

* well, the capability — I'm open to the argument that AI are thick due to the need for so many more examples than humans need, and are simply making up for it by being very very fast and squeezing the equivalent of several million years of experiences for a human into a month of wall-clock time.

I didn’t buy that argument at all either.

Minds shuffle information. Including about themselves.

Paper with information being shuffled by rules exhibiting intelligence and awareness of “self” is just ridiculously inefficient. Not inherently less capable.

I don’t think I understand this entirely. The point of the thought experiment is to assume the possibility of the room and consider the consequences. How it might be achievable in practice doesn’t alter this.

The room is possible because there's someone inside with a big list of rules of which Chinese characters to reply with. This represents the huge amount of data processing and statistical power. When the thought experiment was created, you could argue that the room was impossible, so the experiment was meaningless. But that's no longer the case.
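The "big list of rules" can be caricatured as a pure lookup table; a minimal sketch (with made-up entries, not anything from Searle's paper) of the operator's mechanical role:

```python
# Caricature of the Chinese room: the person inside matches incoming
# symbols against a hypothetical rule book and copies out the prescribed
# reply, understanding none of it.
RULE_BOOK = {
    "你好": "你好！",          # greeting -> greeting
    "你会说中文吗": "会。",    # "do you speak Chinese?" -> "yes."
}

def room_reply(symbols: str) -> str:
    # Followed mechanically; an input with no rule gets a shrug.
    return RULE_BOOK.get(symbols, "？")

print(room_reply("你好"))  # prints 你好！
```

The thought experiment's force depends on this table covering every possible conversation, which is exactly the scale of data that was unimaginable when it was proposed.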
If you s/Chinese Room/LLM/ in any of the counterarguments to the thought experiment, how many of them does it invalidate?

I'm not sure I'm following you. My comment re the Chinese room was that the parent said the data processing we now have was unimaginable back in the day. In fact, it was imaginable: the Chinese room imagined it.

I was responding to the point that the thought experiment was meaningless.