I agree, it's just easier to write requirements and refine things as if working with a human. I no longer care that this risks anthropomorphising it, as that fight has long been lost. I'd rather focus on remembering it doesn't actually think or reason than on not being polite to it.

Keeping everything generally "human readable" also has the advantage of being easier for me to review later if needed.

I also always imagine that if a colleague joins me on the task, they might have to read through my conversation, so I want it to be clear to a human too.

As you said, that "other person" might be me too. Same reason I comment code: there's another person reading it, and most likely that other person is "me, but next week and with zero memory of this".

We do like anthropomorphising the machines, but I try to think they enjoy it...

How can you use these models for any length of time and walk away with the understanding that they do not think or reason?

What even is thinking and reasoning if these models aren't doing it?

They produce wonderful results, they are incredibly powerful, but they do not think or reason.

Among many other factors, perhaps the key differentiator that prevents me from describing these models as thinking is proactivity.

LLMs are never proactive.

(No, prompting them on a loop is not proactivity.)
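To make that claim concrete: a minimal sketch of "prompting on a loop", where every bit of initiative comes from the outer loop, not the model. The `fake_model` function and the loop structure here are illustrative stand-ins, not any particular framework's API.

```python
# Sketch: an "agent" driven entirely by an external loop.
# fake_model is a hypothetical stand-in for an LLM API call.
def fake_model(prompt: str) -> str:
    # A real implementation would call an LLM; here we echo a canned reply.
    return f"response to: {prompt}"

def agent_loop(task: str, steps: int = 3) -> list[str]:
    history = []
    prompt = task
    for _ in range(steps):
        reply = fake_model(prompt)  # the model only acts when invoked
        history.append(reply)
        prompt = reply              # the loop, not the model, decides to continue
    return history

print(agent_loop("refactor the parser"))
```

The point of the sketch: remove the `for` loop and nothing happens. The model never initiates the next step itself; the scheduler around it does.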

Human brains are so proactive that given zero stimuli they will hallucinate.

As for reasoning, they simply do not. They do a wonderful facsimile of reasoning, one that's especially useful for producing computer code. But they do not reason, and it is a mistake to treat them as if they can.

Thinking and reasoning cannot be abstracted away from the individual who experiences the thinking and reasoning itself and changes because of it.

LLMs are amazing, but they represent a very narrow slice of what thinking is. Living beings are extremely dynamic and both much more complex and simple at the same time.

There is a reason for:

- companies releasing new versions every couple of months

- LLMs needing massive amounts of data to train on that is produced by us and not by itself interacting with the world

- a massive amount of manual labor being required both for data labeling and for reinforcement learning

- them not being able to guide you through a solution on their own, instead needing guidance at every decision point