Language seems to be confused with logic or common sense.

We've observed it previously in psychiatry (and modern journalism, but here I digress), but LLMs have made it obvious that grammatically correct, naturally flowing language requires a "world" model of the language and close to nothing of reality. Spatial understanding? Social cues? Common-sense logic? Mathematical logic? All optional.

I'd suggest we call the LLM language fundament a "Word Model" (not a typo).

Trying to distil a world model out of the word model. A suitable starting point for a modern remake of Plato's cave.

I am baffled that people have to continue making this argument over and over and over. Your rationale makes total sense to me, but the debate rages on whether or not LLMs are more than just words.

Articles like this only seem to confirm that any reasoning is an illusion based on probabilistic text generation. Humans are not carefully writing out all the words of this implicit reasoning, so the machine can't appear to mimic them.

What am I missing that makes this debatable at all?

I don’t think there are any reasonable arguments against that point, but “LLMs are more than just words” is sort of unfalsifiable, so you can never convince someone otherwise if they’re really into that idea.

From a product point of view, sometimes all you need is Plato’s cave (to steal that from the OC) to make a sale, so no company has an incentive to go against the most hyped line of thought either.

People have faith that a phenomenon is explainable in a way that satisfies their world view, and only when evidence to the contrary appears can the misunderstanding be deflated.
Language is the tool we use to codify a heuristic understanding of reality. The world we interact with daily is not the physical one, but an ideological one constructed out of human ideas from human minds. This is the world we live in, and the air we breathe is made partly of our ideas about oxygenation and partly of our concept of being alive.

It's not that these "human tools" for understanding "reality" are superfluous, it's just that they are second-order concepts. Spatial understandings, social cues, math, etc. Those are all constructs built WITHIN our primary linguistic ideological framing of reality.

To put this in coding terms: why would an LLM use Rails to make a project when it could just as quickly produce a project by writing directly to the socket?

To us these are totally different tasks that would actually require totally different kinds of programmers, but when one language is as good as another and language is everything, the inventions we made to expand the human brain's ability to delve into linguistic reality are of no use.
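To make that Rails-versus-socket contrast concrete, here is a minimal sketch (in Python rather than Ruby, purely for brevity; everything in it is illustrative) of the low-level end: serving a single HTTP response by writing the protocol to a socket by hand, exactly the plumbing a framework normally hides behind a one-line route handler.

    import socket

    # Serve one hard-coded HTTP response by hand, no framework involved.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 8080))
    server.listen(1)

    conn, _addr = server.accept()
    conn.recv(4096)  # read (and ignore) the request bytes
    body = b"hello from the socket"
    response = (
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/plain\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n"
        + body
    )
    conn.sendall(response)
    conn.close()
    server.close()

In a framework the same behavior is a route declaration and a return value; all the bookkeeping above is the abstraction gap the comment is pointing at.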

I can suggest one reason why an LLM might prefer writing in a higher-level language like Ruby vs assembly. The reason is the same as why physicists and mathematicians like to work with complex numbers using "i" instead of explicit calculation over 4 real numbers. Using "i" allows us to abstract out and forget the trivial details. "i" allows us to compress ideas better. Compression allows for better prediction.
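A toy illustration of that compression argument (just a sketch, not anything from the article): multiplying two complex numbers with "i" doing the bookkeeping, versus spelling the same product out over the four real components.

    # 1. Abstracted: let "i" (Python's 1j) carry the bookkeeping.
    def mul_abstract(z, w):
        return z * w

    # 2. Explicit: the same product, written over four real numbers.
    def mul_explicit(a, b, c, d):
        # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
        return (a * c - b * d, a * d + b * c)

    z, w = 1 + 2j, 3 - 1j
    assert mul_abstract(z, w) == complex(*mul_explicit(1, 2, 3, -1))

Both compute the same thing; the first form is shorter to state and easier to predict, which is the compression point.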
Except LLMs are trained on higher-level languages. Good luck getting your LLM to write your app entirely in assembly. There just isn’t enough training data.
But in theory, with what training data there IS available on how to write in assembly, combined with the data available on what's required to build an app, shouldn't a REAL AI be able to synthesize the knowledge necessary to write a webapp in assembly? To me, this is the basis for why people criticize LLMs: if something isn't in the data set, it's just not conceivable by the LLM.
Yes. There is just no way of knowing how many more watts of energy it may need to reach that level of abstraction and depth - maybe one more watt, maybe never.

And the random noise in the process could prevent it from ever being useful, or it could allow it to find a hyper-efficient, clever way to apply cross-language transfer learning to allow a 1->1 mapping of your perfectly descriptive prompt to equivalent ASM... but just this one time.

There is no way to know where performance per parameter plateaus; or appears to on a projection, or actually does... or will, or deceitfully appears to... to our mocking dismay.

As we are currently hoping to throw power at it (we fed it all the data), I sure hope it is not the last one.

There isn't that much training data on reverse engineering Python bytecode, but in my experiments ChatGPT can reconstruct a (unique) Python function's source code from its bytecode with high accuracy. I think it's simulating the language in the way you're describing.
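For anyone who wants to try that kind of experiment, a minimal sketch (assuming Python's standard dis module; the commenter's exact setup isn't given): disassemble a small function and hand the model only the bytecode listing, asking it to reconstruct equivalent source.

    import dis

    def mystery(a, b):
        # toy function whose source we pretend to have lost
        return (a + b) * (a - b)

    # Human-readable bytecode listing; paste this, without the source above,
    # into the model and ask it to reconstruct an equivalent Python function.
    print(dis.Bytecode(mystery).dis())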
I don’t buy this. My child communicates with me using emotion and other cues because she can’t speak yet. I don’t know much about early humans or other sapiens but I imagine they communicated long before complex language evolved. These other means of communication are not second order, they are first order.
It’s in the name: Language Model, nothing else.
I think the previous commenter chose "word" instead of "language" to highlight that a grammatically correct, naturally flowing chain of words is not the same as a language.

Thus, Large Word Model (LWM) would be more precise, following his argument.

I'm not sure of the best way to describe what it is that LLMs have had to learn to do what they do - minimize next-word errors. "World model" seems misleading since they don't have any experience with the real world, and even in their own "world of words" they are just trained as passive observers, so it's not even a world-of-words model where they have learnt how this world responds to their own output/actions.

One description sometimes suggested is that they have learnt to model the (collective average) generative processes behind their training data, but of course they are doing this without knowing what the input was to that generative process - WHY the training source said what it did - which would seem to put a severe constraint on their ability to learn what it means. It's really more like they are modelling the generative process under the false assumption that it is auto-regressive, rather than reacting to a hidden outside world.

The tricky point is that LLMs have clearly had to learn something at least similar to semantics to do a good job of minimizing prediction errors, although this is limited both by what they are architecturally able to learn, and by what they need to learn for this task (there is literally no reward for learning anything beyond what's needed to predict the next word).

Perhaps it's most accurate to say that rather than learning semantics they've learned deep predictive contexts (patterns). Maybe if they were active agents, continuously learning from their own actions, there wouldn't be much daylight between "predictive contexts" and "semantics", although I think semantics implies a certain level of successful generalization (& exception recognition) to utilize experience in novel contexts. Looking at the failure modes of LLMs, such as on the farmer-crossing-the-river-by-boat puzzles, it seems clear they are more on the (exact training data) predictive-context end of the spectrum, rather than really having grokked the semantics.

I suggested "word model" because it's a catchy pun on "world model".

It's still a language and not merely words. But language is correct even when it wildly disagrees with everyday existence as we humans know it. I can say that "a one-gallon milk jug easily contains 2000 liters of milk" and it's language in use as language.

There is a four part documentary by Stephen Fry called "Planet Word". Worth watching.
Bingo, great reply! This is what I've been trying to explain to my wife. LLMs use fancy math and our language examples to reproduce our language but have no thoughts or feelings.
Yes, but the initial training sets did have thoughts and feelings behind them, and those are reflected back to the user in the output (with errors).
non, c'est une pipe ("no, it is a pipe")

The ability to generate words describing emotions is not the same thing as the LLM having real emotions.

I sometimes wonder how they’d do if trained on a relatively rigid language like Japanese that has far fewer ambiguities than English.
Hi, I’m just a random internet stranger passing by who was intrigued by Plato’s Cave, as I’m not a fancy person who reads books. GPT-4o expanded on it for you quite well, but I’m not sure how I feel about it…

Using AI how I just did feels like cheating on an English class essay by using SparkNotes, getting a B+, and moving right on to the next homework assignment.

On one hand, I didn’t actually read Plato to learn and understand this connection, nor do I have a good authority to verify if this output is a good representation of his work in the context of your comment.

And yet, while I’m sure students could always buy or borrow reference books for common school texts, AI now makes this “spark notes” process effectively a commodity for almost any topic, like having a cross-domain, low-cost tutor instantly available at all times.

I like the metaphor that calculators did to math what LLMs will do for language, but I don’t really know what that means yet

GPT output:

“““ The reference to Plato’s Cave here suggests that language models, like the shadows on the wall in Plato’s allegory, provide an imperfect and limited representation of reality. In Plato’s Cave, prisoners are chained in a way that they can only see shadows projected on the wall by objects behind them, mistaking these shadows for the whole of reality. The allegory highlights the difference between the superficial appearances (shadows) and the deeper truth (the actual objects casting the shadows).

In this analogy, large language models (LLMs) produce fluent and grammatically correct language—similar to shadows on the wall—but they do so without direct access to the true “world” beyond language. Their understanding is derived from patterns in language data (“Word Model”) rather than from real-world experiences or sensory information. As a result, the “reality” of the LLMs is limited to linguistic constructs, without spatial awareness, social context, or logic grounded in physical or mathematical truths.

The suggestion to call the LLM framework a “Word Model” underscores that LLMs are fundamentally limited to understanding language itself rather than the world the language describes. Reconstructing a true “world model” from this “word model” is as challenging as Plato’s prisoners trying to understand the real world from the shadows. This evokes the philosophical task of discerning reality from representation, making a case for a “modern remake of Plato’s Cave” where language, not shadows, limits our understanding of reality. ”””
