You are drawing a distinction between agents that maintain a probability distribution over possible states and those that don’t, and you’re putting humans in the latter category. It seems clear to me that all agents are always doing what you describe in (2), which I think resolves what you don’t like about it.
It also seems like humans spend varying amounts of energy on updating probability distributions vs. predicting within a specific model, but I would guess that LLMs can learn to do the same on their own.
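To make the distinction concrete, here is a minimal sketch (my own illustration, not from the original post; all names are assumptions) contrasting an agent that maintains and updates an explicit distribution over candidate models with one that commits to a single model and only predicts within it:

```python
# Illustrative sketch: (a) maintain a distribution over hypotheses and update it
# with Bayes' rule, vs. (b) commit to one model and predict within it.
# Hypothesis names and likelihood values are made up for the example.

from typing import Dict

def bayes_update(prior: Dict[str, float],
                 likelihood: Dict[str, float]) -> Dict[str, float]:
    """Return the posterior over hypotheses given per-hypothesis likelihoods."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Mode (a): keep a full posterior over which model of the world is correct.
posterior = {"model_A": 0.5, "model_B": 0.5}
posterior = bayes_update(posterior, {"model_A": 0.9, "model_B": 0.2})

# Mode (b): pick the currently best model and spend effort predicting within it,
# without tracking the alternatives.
best_model = max(posterior, key=posterior.get)
print(posterior, "->", best_model)
```

The point of the sketch is only that the two modes trade off: effort spent re-weighting the posterior in (a) is effort not spent predicting within the single chosen model in (b).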