This is a feature of maps. Maps can’t model the map/territory correspondence unless they also create a simulated territory. Then you can ask whether the modeled map/territory correspondence is accurate, which creates another layer of modeling, ad infinitum.
This isn’t a real problem though. It’s more a display of the limitations of maps in the maps’ terms.
The standard way around this conundrum is to fold self-reference into your map, instead of just recursion. Then going “up” a layer of abstraction lands you right where you started.
…which is in fact part of what you’re doing by noticing this glitch.
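The contrast between groundless recursion and self-reference can be sketched in code. This is only an illustrative analogy (the names and structure are my own, not anything from the original): modeling the map/territory correspondence by *copying* spawns a new simulated territory at every layer, whereas a map that simply *refers to itself* closes the loop in one step.

```python
# Groundless recursion: to model the correspondence, the map must
# contain a simulated territory, which needs its own map, and so on.
def model_by_copying(territory, depth):
    if depth == 0:
        return {"territory": territory}
    # each layer models the layer below, ad infinitum
    return {"territory": territory,
            "model": model_by_copying(territory, depth - 1)}

# Self-reference instead: the map contains a reference to itself,
# so going "up" a layer of abstraction lands you where you started.
world_map = {"road": "line"}
world_map["self"] = world_map

# No matter how many layers "up" you go, you're back at the same map.
assert world_map["self"] is world_map
assert world_map["self"]["self"] is world_map
```

The copying version has to stop at some arbitrary depth; the self-referential version has no such problem, at the cost of no longer being a strictly hierarchical model.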
But the problem vanishes when you stop insisting that the map include everything. If I point at a nearby road, and then point at a roadmap and say “That road over there is this line”, there’s no problem. You can follow the map/territory correspondence just fine. That’s what makes the roadmap potentially useful in the first place.
It’s just an issue when you try to encapsulate that process in a model. How are you modeling it? Ah, oops, groundless recursion.
Which is to say, maps aren’t the basis of thinking. They’re extensions of thinking.
Sadly, language is based on maps. So describing this clearly can be a pain.
Hence “The Tao that can be said is not the true Tao.”
How does one learn to recognize their first self-referential map? (Or if it’s instinctual, how did the first of our progenitors recognize the very first self-referential map?)
EDIT: I’m not even sure how it could be possible to determine whether one’s maps are sufficiently self-referential. What would we compare them to?