I have an idea that I would like to float. It’s a rough metaphor that I’m applying from my mathematical background.
Map and Territory is a good way to describe the difference between beliefs and truth. But I wonder if we are too concerned with the One True Map as opposed to an atlas of pretty good maps. You might think that this is a silly distinction, but there are a few reasons why it may not be.
First, different maps in the atlas may disagree with one another. For instance, we might have a series of maps that each describe a small area very accurately but become more and more distorted the farther out we go. Each ancient city-state might have accurate maps of the surrounding farms for tax purposes but only wild guesses about what lies beyond a mountain range or desert. A map might also accurately describe the territory at one scale but simplify much smaller scales. The yellow pixel in a map of the US is actually an entire town, with roads and buildings and rivers and topography, not perfectly flat fertile farmland.
Or take another example. Suppose you have a virtual reality machine: a portable helmet with a screen and speakers, used in a large warehouse, so that you can walk around the giant floor as if you were walking around a virtual world. Now suppose two people are inserted into this virtual world at different places, so that when they meet in the virtual world, their bodies are actually a hundred yards apart in the warehouse; and if their bodies bump into each other in the warehouse, they think they are a hundred yards apart in the virtual world. Each person’s map is perfectly usable on its own, yet the two maps disagree about where the people stand.
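A toy sketch of the warehouse setup may help (entirely my own illustration; the names, offsets, and coordinates are made up):

```python
# Two players share one warehouse (the territory), but each headset applies
# its own offset to produce a virtual position (that player's map).
OFFSETS = {
    "alice": (0, 0),    # Alice's virtual coords equal her warehouse coords
    "bob": (100, 0),    # Bob's headset shifts him 100 yards east in VR
}

def virtual_position(player: str, warehouse_pos: tuple) -> tuple:
    """One player's 'chart': warehouse coordinates -> virtual coordinates."""
    dx, dy = OFFSETS[player]
    x, y = warehouse_pos
    return (x + dx, y + dy)

# Both players stand at the very same warehouse spot...
print(virtual_position("alice", (50, 20)))  # (50, 20)
print(virtual_position("bob", (50, 20)))    # (150, 20)
# ...yet their maps place them 100 yards apart. Neither map is broken;
# they are just different charts of the same territory.
```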
Thus, when we as rationalists are evaluating our maps and those of others, argument by contradiction does not always work. That two maps disagree does not invalidate either map. Instead, it should prompt us to see where our maps are reliable and where they are not, where they overlap or agree and are interchangeable, and where only one will do. Even more controversially, we should examine maps that are demonstrably wrong in some places to see whether, and where, they are good maps. Moreover, it might be more useful to add an entirely new map to our atlas than to keep improving the resolution of one we already have, moving its lines around ever so slightly as we bring it asymptotically closer to truth.
My lesson for the rationality dojo would thus be:
- Be comfortable that your atlas is not consistent. Learn how to use each map well and how they fit together. Recognize when others have good maps and figure out how to incorporate those maps into your atlas, even if they might seem inconsistent with what you already have.
As you may have noticed, this idea comes from differential geometry, where one uses a collection (an “atlas”) of overlapping charts, i.e. local homeomorphisms to R^n (the “maps”), as a suitable structure for discussing manifolds.
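For concreteness, here is the standard textbook picture the metaphor draws on (the notation below is mine):

```latex
% An atlas on a manifold M is a family of charts (U_a, phi_a):
% open sets U_a covering M, each mapped homeomorphically into R^n.
\[
  \mathcal{A} = \{(U_\alpha, \varphi_\alpha)\}_{\alpha \in A},
  \qquad
  \varphi_\alpha : U_\alpha \to \mathbb{R}^n,
  \qquad
  \bigcup_{\alpha} U_\alpha = M.
\]
% No single chart need cover M, and two charts can assign different
% coordinates to the same point; on overlaps, the transition map
% translates between them:
\[
  \varphi_\beta \circ \varphi_\alpha^{-1} :
  \varphi_\alpha(U_\alpha \cap U_\beta) \longrightarrow
  \varphi_\beta(U_\alpha \cap U_\beta).
\]
```

The rationality analogue: no single chart is the One True Map, but the atlas as a whole, together with the rules for moving between charts, is what lets you work with the territory.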
I tend to agree that we would frequently do better to make do with an atlas of charts rather than seeking the One True Map. But I’m not sure I like the differential geometry metaphor. It is not the location on the globe that makes the use of one chart more fruitful than another; it is a question of scale, or, as a computer nerd might express it, how zoomed in you are. And I would prefer to speak of different models rather than different maps.
For example, at one level of zoom, we see the universe as non-deterministic due to QM. Zoom out a bit and you have billiard-ball atoms in a Newtonian billiard room. Zoom out a bit more and you find non-deterministic fluctuations. Zoom out a bit more still and you have deterministic chemical thermodynamics (unless you are dealing with a Brusselator or some such).
But I would go further than this. I would also claim that we shouldn’t imagine that these maps necessarily become better and better maps of the One True Territory as you zoom in. We should remain open to the idea that “it’s maps (or models, or turtles) all the way down”.
What’s an example of people doing this?
I think one place to look for this phenomenon is when, in a debate, you seize upon someone’s hidden assumptions. When this happens, it usually feels like a triumph: you have successfully uncovered an error in their thinking that invalidates much of what they have argued. And it is incredibly annoying to have one of your own hidden assumptions laid bare, because it is both embarrassing and means you have to redo a lot of your thinking.
But hidden assumptions aren’t bad. You have to make some assumptions to think through a problem at all. You can only reason from somewhere to somewhere else; it’s a transitive operation, and there has to be a starting point. Moreover, assumptions make thinking and computation easier. They reduce the complexity of the problem, which means you can figure out at least part of it. Assuming pi is 3.14 is fine if you want an estimate of the volume of the Earth, but useless if you want to prove a theorem. So in the metaphor, maps are characterized by their assumptions/axioms.
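To put numbers on the pi example (a quick sketch; the mean Earth radius of roughly 6,371 km is my own input, not from the comment):

```python
import math

# Volume of a sphere: V = (4/3) * pi * r^3.
r = 6371.0  # mean Earth radius in km (assumed for this estimate)

v_rough = (4 / 3) * 3.14 * r**3       # assuming pi = 3.14
v_precise = (4 / 3) * math.pi * r**3  # full-precision pi

rel_error = abs(v_rough - v_precise) / v_precise
print(f"rough:   {v_rough:.4e} km^3")
print(f"precise: {v_precise:.4e} km^3")
print(f"relative error: {rel_error:.3%}")  # about 0.051%: fine for an
                                           # estimate, worthless for a proof
```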
When you come into contact with assumptions, you should make them as explicit as possible. But you should also be willing to provisionally accept others’ assumptions and think through their implications. And it is often useful to let that sit alongside your own set of beliefs as an alternate map, something that can shed light on a situation when your beliefs are inadequate.
This might be silly, but I tend to think there is no Truth, just good axioms. And oftentimes fierce debates come down to incompatible axioms. In these situations, you are better off making explicit both sets of assumptions, accepting that they are incompatible and perhaps trying on the other side’s assumptions to see how they fit.
Mostly agree. It’s really irritating and unproductive (and for me, all too frequent) when someone thinks they’ve got you nailed because they found a hidden assumption in your argument, but that assumption turns out to be completely uncontroversial, or irrelevant, or something your opponent relies on anyway.
Yes, people need to watch for the hidden assumptions they make, but they shouldn’t point out the assumptions others make unless they can say why a given assumption is unreasonable and how weakening it would hurt the argument it’s being used for. “You’re assuming X!” is not, by itself, a relevant counterargument.
You might be interested in How to Lie with Maps.