Whatever the bottom level of our understanding of the map, even a one-level map is still above the territory, so there are still levels below it that carry back to, presumably, the territory. We find some fields-and-forces model that accounts for all the data we’re aware of. But it’s always going to be possible, though less likely the more data we gather, that something flies along and forces us to modify it. So, if we wanted to continue the reductionist approach to the model we’re making of our world, stripping away higher-level abstractions, we’d say that it’s an in-progress unifying simplification of, and set of minimal inferences from, the results of many experiments, which correspond to measurements of the world at certain levels of sensitivity by different means.
Like, I can draw a picture of a face in finer and finer detail, down to “all the detail I see,” but it’s still going to contain unifying assumptions (like a vector representation of a face, versus the data, which may be pixelated) made up of specific individual measurement events. Or I can show a chart of where and how all the nerves in my eyes are excited, which is the ‘raw data’ level of what I have access to about what’s ‘out there’, for which the simplest explanation is most probably a face. Actually, it’s kind of interesting to think of it that way, because a lot of our raw mental data is ‘vectored’ already. But whenever we do a linear regression on a dataset, that’s also a reduction-to-a-vector of something.
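That last point can be made concrete with a minimal sketch (the data here is made up purely for illustration): a cloud of fifty noisy measurement pairs gets compressed into a two-component vector, a slope and an intercept, which is exactly the kind of unifying simplification described above.

```python
import numpy as np

# Fifty hypothetical "measurement events": y values scattered
# around the (unknown-to-the-model) line y = 2x + 1.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)

# Least-squares fit: the whole dataset is reduced to a
# two-component vector (slope, intercept).
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)  # close to 2.0 and 1.0
```

The fitted pair doesn’t reproduce any individual data point; it’s the simplest model that accounts for all of them, and a new measurement could always force a revision, just as the essay says of the fields-and-forces model.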