> In short: LLMs have no concept of truth, or even a desire to produce it
They do produce true statements most of the time, though.
That's just because true statements are more likely to occur in their training corpus.
The overwhelming majority of true statements aren't in the training corpus, due to combinatorial explosion: there are vastly more true statements (every arithmetic fact, for instance) than any corpus could contain. What does it mean, then, to say they are more likely to occur there?
The training set is far too small for that to explain it.
Try to explain why one-shotting works.
Uh, explain what? You probably read something into what I said while I was being very literal.
If you train an LLM on mostly false statements, it will generate both known and novel falsehoods. Same for truth.
An LLM has no intrinsic concept of true or false; everything is a function of the training set. It just generates statements similar to those it has seen, along with higher-dimensional analogies of them.
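A toy illustration of that point, with a bigram Markov chain standing in for the LLM (the corpus and sentences here are made up for the example): train it on mostly false statements, and sampling reproduces both the falsehoods it saw and novel recombinations of them.

```python
import random
from collections import defaultdict

# Crude stand-in for an LLM: a bigram model with no notion of truth.
# It only reproduces and recombines patterns from its training corpus.
corpus = [
    "the sun orbits the earth",    # false
    "the moon orbits the sun",     # false
    "water boils at ten degrees",  # false
    "the earth orbits the sun",    # true (a minority of the data)
]

# Count bigram transitions: word -> possible next words.
transitions = defaultdict(list)
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def sample():
    """Generate a sentence by following training-set transitions."""
    word, out = "<s>", []
    while True:
        word = random.choice(transitions[word])
        if word == "</s>":
            return " ".join(out)
        out.append(word)

random.seed(0)
for _ in range(5):
    print(sample())
# Outputs mix verbatim training sentences with novel recombinations,
# e.g. "the earth orbits the earth" (false) or "the moon orbits the
# earth" (true), neither of which appears in the corpus. The model
# tracks corpus statistics, not truth.
```

An LLM is of course far more capable than a bigram chain, but the structural point is the same: its outputs are a function of the training distribution, not of any truth predicate.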