I’m not sure the cost of privately held false beliefs is as low as you think it is. The universe is heavily Causally Entangled. Even if, in your example, the shape of the Earth isn’t causally entangled with anything our mechanic cares about, that doesn’t get you off the hook. A false belief can shoot you in the foot in at least two ways. First, you might explicitly use it to reason about the value of some other variable in your causal graph. Second, your intuition might draw on it as an analogy when you are reasoning about something else.
If our car mechanic thinks his planet is a disc supported atop an infinite pile of turtles, when this is in fact not the case, then isn’t he more likely to conclude that other things he actually interacts with more often (such as a complex device embedded inside a car, which our mechanic could understand if he took it apart and then took the pieces apart about five times over) might also be “turtles all the way down”? If I actually lived on a disc on top of infinitely many turtles, then I would be nowhere near as reluctant to conclude that I had a genuine fractal device on my hands. If I actually lived in a world which was turtles all the way down, I would also be much more disturbed by paradoxes involving backward supertasks.
To sum up: False beliefs don’t contaminate your belief pool via the real links in the causal network in reality; they contaminate your belief pool via the associations in your mind.
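To make the first failure mode concrete, here is a toy sketch in Python; the two-variable model and all the numbers are invented purely for illustration, not taken from anything above:

    # Toy model: a confident false belief about the world's large-scale
    # structure shifts the mechanic's estimate of something he actually
    # works on. All probabilities are made up for illustration.

    p_fractal_given_turtles = 0.30       # P(device is fractal | world is turtles all the way down)
    p_fractal_given_not_turtles = 0.001  # P(device is fractal | world is not)

    def p_device_fractal(p_world_turtles):
        """Marginal credence that the device is genuinely fractal,
        given the mechanic's credence that the world itself is."""
        return (p_world_turtles * p_fractal_given_turtles
                + (1 - p_world_turtles) * p_fractal_given_not_turtles)

    print(p_device_fractal(0.95))  # ~0.285: quite open to a truly fractal device
    print(p_device_fractal(0.01))  # ~0.004: treats it as a repeating pattern at most

Note that the false belief never reaches the device through any causal link out in the world; it only enters through the link the mechanic himself draws between the two variables.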
This is what I meant by epistemology. It’s not the bad beliefs causing bad epistemology (with certain exceptions, like some instances of religion, in which people may mess up their epistemology to retain their beliefs), but the bad epistemology causing the beliefs. I picked a bit too extreme an example to illustrate my point, and made note of alternative examples in the original.
If I told the car mechanic, “Actually, the Earth revolves around the Sun, which is one star among billions in one galaxy among billions, and you should believe me because God told me so,” and he changes his beliefs accordingly, he’s not really any better off than he was. The problem is not his belief, it’s his system for validating beliefs.
By contrast, if I actually explained why that statement was true and he said, “Well, duh, of course I was wrong! I really should have looked into that!” then I’d say he never had much of a problem to begin with, other than a lack of curiosity.
I’m not sure what the relationship is between metaphors propagating in someone’s thinking and the causal entanglement of the universe.
I’d argue that people profit from having different ways to look at the world—even though it shares a common structure, this isn’t always locally noticeable or important, and certainly things can look different at different scales. I’m equally unsure that it matters whether, on seeing an object that is fractal at the scales relevant to you, you assume it is truly fractal or just a repeating pattern across a few scales.
I agree with Psychohistorian that it’s more important that the mechanic be willing to abandon his belief with greater knowledge of the physics of the universe. But even then, facility with fractal thinking may still offer benefits.
That is: the associations in your mind are put to a constant test when you encounter the real world. Certainly long-term, serious misconceptions—like seeing God in everything and missing insights into natural truth—can be quite a thing to overcome and can stifle certain important lines of thought. But for any beliefs you pick up from reading inadequately informed science journalism—well, the ways of thinking your mind is going to be contaminated with are those already prevalent in our culture, so you probably encounter them anyway. They are also things that seem plausible to you given your background, so again, you probably already think in these terms; or their interconnectedness with all the other observations of life you’ve had is too small to distinguish between the two alternate explanations—the false one you’ve just read and the real truth, which is still “out there.” And if scientific results were really so obvious from what we already know about the universe, research would be a lot less important; rather, it is because scientific findings can offer counter-intuitive results, ways of thinking that we DON’T find useful or essential in everyday life, that we find them so intriguing.