LLMs are literally goal machines: pursuing the goal you set is all they do. So it's important to give them specific goals to work toward. It's also why, logically, you want to break a problem into many small subproblems, each with a concrete goal.
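A minimal sketch of that decomposition idea: instead of one vague prompt, the task is split into small subgoals, each run as its own prompt with a concrete output, with prior results threaded forward. `call_llm` is a hypothetical stand-in for any chat-completion API, not a real library call.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a model API here.
    return f"[response to: {prompt!r}]"

def solve(task: str, subgoals: list[str]) -> list[str]:
    """Run each subgoal as its own concrete prompt, feeding earlier
    results back in as context for later subgoals."""
    context = f"Task: {task}"
    results = []
    for goal in subgoals:
        prompt = f"{context}\nGoal: {goal}"
        answer = call_llm(prompt)
        results.append(answer)
        context += f"\nDone: {goal} -> {answer}"
    return results

results = solve(
    "Summarize a codebase",
    ["List the modules", "Describe each module in one line", "Write the summary"],
)
```

Each subgoal here has an output you can check before moving on, which is the point of keeping the goals small and concrete.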
Do you only mean instruct-tuned LLMs? Or the base (pretrained) model too?
The entire system and the agent loop allow for more complex goal resolution. The LLM models language (obviously), and language is goal oriented, so it models goal-oriented language. It's an emergent feature of the system.
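The agent loop mentioned above can be sketched roughly like this: the model proposes a next step toward the goal, the system executes it and feeds the observation back, and the loop ends when the model signals completion. `propose_step` is a toy stand-in for a real model call, and the "environment" is just a string, so this only illustrates the loop's shape.

```python
def propose_step(goal: str, history: list[str]) -> str:
    # Placeholder policy: a real agent would ask the LLM here.
    # This stub takes two steps and then declares the goal met.
    return "DONE" if len(history) >= 2 else f"step {len(history) + 1}"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Propose-execute-observe loop: the model picks an action,
    the system runs it, and the observation goes back into history."""
    history: list[str] = []
    for _ in range(max_steps):
        action = propose_step(goal, history)
        if action == "DONE":
            break
        observation = f"executed {action}"  # toy 'environment' response
        history.append(observation)
    return history

trace = run_agent("tidy the repo")
```

The goal resolution lives in the loop, not in any single model call: each pass gives the model fresh context to steer by.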