My reply here feels weird to me because I think you basically get my point, but you’re one inferential gap away from my perspective. I’ll see if I can close that gap.
> It’s true that the only way we get any glimpse at the territory is through our sensory experiences. However, the map that we build in response to this experience carries information that sets a lower bound on the causal complexity of the territory that generates it.
We need not assume there is anything more than experience, though. Through experience we might infer the existence of some external reality that sense data is about (this is the realist position), and, as you say, this gives us evidence that the world really does have structure external to our experience. But we need not assume it to be so.
This may seem a subtle distinction, but the point is to shift as much as possible from assumption to inference. If we take an arealist stance and do not assume realism, we may still come to infer it from the evidence we collect. This is arguably better, even if it rarely produces different results, because everything we think about external reality then lives within our minds, where we can examine it, rather than outside of them, where we can say nothing about it. It also lets us make physical claims about the possibility of an external reality rather than metaphysical assumptions about one.
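To make the assumption-to-inference move concrete, here is a toy Bayesian model comparison in Python. This is only my own illustrative framing (the models, priors, and numbers are all invented): "there is an external world" enters as one hypothesis among others, and the sense data can raise or lower its posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sense data": repeated noisy glimpses that share a hidden regularity,
# the kind of cross-observation structure an external object would produce.
hidden_cause = 1.7
data = hidden_cause + 0.3 * rng.standard_normal(50)

def log_lik_no_world(x):
    """H0: experiences are independent standard-normal noise with no
    external structure behind them."""
    return float(np.sum(-0.5 * x**2 - 0.5 * np.log(2 * np.pi)))

def log_lik_world(x, noise_sd=0.3):
    """H1: experiences share one latent external cause (unit-normal prior),
    integrated out analytically; the marginal over observations is Gaussian
    with covariance ones((n, n)) + noise_sd**2 * eye(n)."""
    n = len(x)
    cov = np.ones((n, n)) + noise_sd**2 * np.eye(n)
    _, logdet = np.linalg.slogdet(cov)
    return float(-0.5 * (x @ np.linalg.solve(cov, x) + logdet + n * np.log(2 * np.pi)))

# With equal prior odds, the log Bayes factor measures how strongly the
# data favor positing an external cause: realism as inference, not premise.
print(f"log Bayes factor for an external cause: "
      f"{log_lik_world(data) - log_lik_no_world(data):.1f}")
```

Nothing hangs on the particular models; the point is just that the external-reality hypothesis sits inside the inference machinery, where evidence can bear on it, rather than underneath it as an unexaminable premise.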
> This may seem a subtle distinction, but the point is to shift as much as possible from assumption to inference. If we take an arealist stance and do not assume realism, we may still come to infer it from the evidence we collect.
I think I can agree with this.
One caveat is that the brain’s map-making algorithm does make some implicit assumptions about the nature of the territory. For instance, it has to assume that it’s modeling an actual generative process with hierarchical and cross-modal statistical regularities. It further assumes, based on what I understand about how the cortex learns, that the territory has structure like translational symmetry (which the map exploits through equivariant representations) and spatiotemporally local causality.
The cortex (and cerebellum, hippocampus, etc.) has built-in structural and dynamical priors that it tries to map its sensory experiences to, which limits the hypothesis space that it searches when it infers things about the territory. In other words, it makes assumptions.
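To illustrate how such a built-in prior shrinks the hypothesis space, here is a minimal sketch using 1D convolution. The analogy to convolution is mine, not a claim about the cortex’s actual algorithm: weight sharing bakes in the assumption that the same local cause can appear anywhere, so position-specific hypotheses are never even represented.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(signal, kernel):
    """Valid-mode 1D convolution with a single shared kernel.
    Weight sharing encodes a translational-symmetry prior; the small
    kernel width encodes a spatially-local-causality prior."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(len(signal) - k + 1)])

signal = rng.standard_normal(20)
kernel = rng.standard_normal(3)

# Equivariance: shifting the input shifts the output by the same amount
# (checked away from the wrap-around boundary), so the model cannot learn
# a hypothesis that treats position 3 differently from position 7.
out = conv1d(signal, kernel)
out_shifted = conv1d(np.roll(signal, 4), kernel)
assert np.allclose(out_shifted[4:], out[:-4])
print("shifted input gives equally shifted output:", np.allclose(out_shifted[4:], out[:-4]))
```

The restriction is what makes learning tractable, but it is still an assumption: a territory without that symmetry would be systematically mis-mapped.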
On the other hand, it is a bit of a theme around here that we should be able to overcome such cognitive biases when trying to understand reality. I think you’re on the right track in trying to peel back the assumptions that evolution gave us (even the more seemingly rational ones like splitting map from territory) to ground our beliefs as solidly as possible.