Isn’t a modern LLM with thinking tokens fairly goal-directed? But yes, we hallucinate in our sleep, while LLMs will hallucinate details if the prompt isn’t grounded enough.
The thing about dream logic is that it can be a completely rational series of steps, but there's usually a giant plot hole which you only realise the second you wake up.
This definitely matches my experience of talking to AI agents and chatbots. They can be extremely knowledgeable on arcane matters yet need to have obvious (to humans) assumptions pointed out to them, since they only have book smarts and not street smarts.
Assuming this is not a rhetorical question: no, it is not. The only "goal" is to maximize plausibility.
Again, how is that different from humans? I’m not going around trying to prove my code correct when I write it manually.
I write code to solve a problem. Not code that looks like it solves the problem if a non-technical client squints at it.
And if you don't prove your code, do you not design at all then? Do you never draw state diagrams?
Every design is an informal proof of the solution. I rarely write formal proofs. Most of the time I write down just enough to convince myself that the design solves the problem.