Keeping everything generally "human readable" also has the advantage of being easier for me to review later if needed.
As you said, that "other person" might be me too. Same reason I comment code: there's another person reading it, and most likely that other person is "me, but next week and with zero memory of this".
We do like anthropomorphising the machines, but I try to think they enjoy it...
What even is thinking and reasoning if these models aren't doing it?
Among many other factors, perhaps the key differentiator for me, the one that prevents me from describing these models as thinking, is proactivity.
LLMs are never pro-active.
(No, prompting them in a loop is not proactivity.)
Human brains are so proactive that given zero stimuli they will hallucinate.
As for reasoning, they simply do not. They produce a wonderful facsimile of reasoning, one that is especially useful for producing computer code. But they do not reason, and it is a mistake to treat them as if they can.
LLMs are amazing, but they represent a very narrow slice of what thinking is. Living beings are extremely dynamic, and somehow both much more complex and much simpler at the same time.
There is a reason for:
- companies releasing new versions every couple of months
- LLMs needing massive amounts of training data that is produced by us, not by the model itself interacting with the world
- a massive amount of manual labor being required both for data labeling and for reinforcement learning
- them not being able to guide you through a solution, instead needing guidance at every decision point