I think, when I say “model”, I have in mind something very broad like “a model is a thing that can be used for predictions, and is trained specifically to be good at predictions, e.g. by self-supervised learning”, and when you read the word “model”, you have in mind something very narrow, maybe “a model is something that is just like the model in AlphaZero or other such ML papers”.
For example, I can ask you “what will happen if I do X?” and you might say “If you do X, then Y will happen … oh wait, maybe Z will happen … umm, I’m not sure”. That would never happen in the “model” of AlphaZero. The “model” of AlphaZero takes in a board position and an action (move) and spits out the resulting board position, and this answer is clean and unique and (in the case of AlphaZero but not MuZero) guaranteed-to-be-correct. Obviously the kind of “model” built by the brain is not like that. Sometimes it issues somewhat-self-contradictory predictions and so on.
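To make that contrast concrete, here is a toy sketch (entirely my own illustration; none of this is AlphaZero’s actual code, and all the names and numbers are made up). The first function is an exact rules-based model that returns the one true next state; the second is a learned predictor that can only hand back its best guesses, which may be diffuse or even somewhat self-contradictory.

```python
# Toy contrast between the two senses of "model".
# (Entirely my own illustration; none of this is AlphaZero's actual code.)

# AlphaZero-style model: the game rules themselves. Given a tic-tac-toe
# board (a tuple of 9 cells) and a move, it returns THE resulting board:
# one clean, unique, guaranteed-correct answer.
def exact_model(board, move, player):
    assert board[move] == " ", "illegal move"
    new_board = list(board)
    new_board[move] = player
    return tuple(new_board)

# Broad-sense "model": a trained predictor. It can only hand back its best
# guesses, which may be diffuse or even somewhat self-contradictory
# ("maybe Y will happen ... maybe Z ... I'm not sure").
def learned_model(situation, action, learned_table):
    # Unseen (situation, action) pairs get a shrugging 50/50 guess.
    return learned_table.get((situation, action), {"Y": 0.5, "Z": 0.5})

empty_board = (" ",) * 9
print(exact_model(empty_board, 4, "X"))   # exactly one possible answer
print(learned_model("about to flip the switch", "flip it",
                    {("about to flip the switch", "flip it"):
                     {"room goes dark": 0.9, "bulb was already dead": 0.1}}))
```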
The thing you mention about split-brain patients is an extreme version, but I think it’s on a continuum with more mundane things like “if I think about it in this way, I predict X, and if I think about it in a different way, I predict Y”. Nevertheless, we are obviously able to make good predictions about the future, and we do so a zillion times a day—“I’m going to walk to the light-switch and flip it off” involves a model-based prediction that we are capable of straightforwardly walking to the light-switch and switching it off, and that if we do so, the switch will stay off and the room will be dark.
Those kinds of predictions (I claim) have all the properties that make the thing generating them “a model” in my book: what we expect is not always what we want, and what we expect is much more likely to actualize than chance, and mistaken expectations tend to lead to model updates in a direction that will reduce the error in similar situations in the future. Yes, it’s kinda messy, like sometimes your temporal lobe can’t reach perfect consensus with your parietal lobe, or your left hemisphere with your right hemisphere, and sometimes “what we expect” has other kinds of self-inconsistencies, etc. But it’s still definitely “a model”, in the (broad) way I use the term. :)
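If it helps, here is a minimal toy sketch of those three properties (again, my own made-up illustration, not a claim about how the brain literally implements any of this): the predictor is separate from any notion of wanting, it ends up well above chance after experience, and whenever it’s wrong it nudges itself toward what actually happened.

```python
# Minimal sketch of the broad sense of "model": a predictor trained by
# self-supervised learning, i.e. updated whenever its prediction misses.
# (Toy illustration; all names and numbers are made up.)

from collections import defaultdict

class TinyPredictiveModel:
    def __init__(self, lr=0.2):
        # Predicted probability of an outcome given (situation, action).
        self.p = defaultdict(lambda: 0.5)   # 0.5 = chance-level prior
        self.lr = lr

    def predict(self, situation, action):
        return self.p[(situation, action)]

    def update(self, situation, action, outcome):
        # Self-supervised update: move the prediction toward what actually
        # happened, reducing the error in similar situations in the future.
        key = (situation, action)
        error = outcome - self.p[key]
        self.p[key] += self.lr * error

model = TinyPredictiveModel()
# Experience: flipping the switch darkens the room, over and over.
for _ in range(30):
    model.update("standing by switch", "flip it off", outcome=1.0)
print(model.predict("standing by switch", "flip it off"))  # ~1.0, far above the 0.5 chance prior
# Note that what the model expects is independent of what anyone wants:
# it would learn the same prediction even if we'd prefer the light to stay on.
```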