Papers show that AI also has a world model, so I don't think that's the right distinction.
Could you please cite these papers? If by AI you mean LLMs, that claim is not supported by anything I know of. If you mean a theoretical world-model-based AI, that's just a tautology.
One conference proceeding paper and one preprint, about LLMs encoding either relative geometric information of objects or simple 2D paths.
One of the papers calls this "programming language semantics", but it uses a 2D grid-navigation DSL. The semantics of that language are nothing like those of an actual programming language.
Neither is the same as the concept being discussed here: a human "world model" of a computer system, through which to interpret the semantics of a program.
Their world model is entirely a byproduct of language, though, not experience. Furthermore, by deliberate design they do not maintain any form of self-recognition or narrative tracking, which is the necessary substrate for developing validating experience. The world model of an LLM is still a map, not the territory. Even though ours arguably shares some of the same qualities, the identity we carry with us and our self-narrative are incredibly powerful for keeping us aligned with the world as it is, without munging it up quite as badly as LLMs seem prone to.