I think it’s worth distinguishing between cases where being correct has pragmatic advantages, and cases where it doesn’t.
For example, there are plenty of people who reject the philosophical underpinnings of modern medicine. There always have been. Those people don’t go to medical school, don’t go to conferences, etc.; eventually a whole discussion emerges to which they are not invited. Believers in medicine don’t convince the believers in homeopathy; they ignore them and concentrate on doing medicine.
And because doctors achieve more valuable things more reliably than homeopaths, over time they displace the homeopaths… they create their own community within which a belief in medicine is pervasive. That the homeopaths are not convinced isn’t actually important; it just means they aren’t part of that community.
Of course, if believing in homeopathy doesn’t correlate with skill at carpentry, then the medicine/homeopathy disagreement may continue to exist among carpenters. But so what? Who cares what carpenters think about medicine? Why is resolving that disagreement worth devoting energy to? Better for practitioners of medicine to devote their efforts to advancing medicine.
Similarly, there are plenty of people who reject atheism. There always will be. The thing for the atheists to do is work in areas where atheism gives them a pragmatic advantage. Over time they will displace the theists in that area, and the atheism/theism disagreement will disappear in that area. If atheism confers no demonstrable advantage to carpenters, then the disagreement will continue to exist among carpenters. But, again, so what?
By the same token, if a philosophical problem turns out not to have any pragmatic implications—that is, if there is no area where people with the correct answer can do something valuable that people with the incorrect answer can’t do, or can’t do as well—then the disagreement will continue to exist everywhere. But, again, so what?
Within the more practical sorts of philosophy, like logic, epistemology, and morality, there are potentially huge gains to society from getting it right. But these can only be “tested” (in the sense of creating a society that revolves around certain philosophical ideas) on a multi-decade time scale, with a huge investment of resources and possible human suffering if you’re wrong—and all such experiments are necessarily imperfect (communists still argue their principles would have worked if the situation had been different).
That means there are practical gains from having good philosophy, but not in a way that means it can be decided by experiment.
Sure, maybe we’d see huge pragmatic gains after everyone had been (for example) an atheist for a century, but there just aren’t smaller gains to be realized from atheism at smaller scales.
My inclination is to distrust anyone who claims that the theory they advocate can only be tested by an apparatus too impractical to build—that it is necessarily untestable at any scale we could actually realize… but I concede that it’s possible.
And agreed that my examples aren’t good analogies for that sort of situation.
Your Rationality is My Business argues against this idea.
Insofar as that post is asserting that the author gets warm fuzzies by caring about my being wrong, and signaling that caring through argument… well, I don’t object to that. It’s actually kind of sweet.
Insofar as it is asserting that it’s useful for him to argue with me whenever I say something wrong, relative to spending the same energy on other projects… well, I observe that the author doesn’t actually do that, given the choice. Which leads me to believe that he doesn’t actually believe that. (Nor do I.)
Let’s not confuse warm fuzzies and utilons.
But but … warm fuzzies are (an important species of) utilons.
And, perhaps more important, rationality isn’t wholly goal-directed, so I expect plenty of experts to continue to try to convince carpenters. Rationalists reason—that’s how we roll. Of course, one can always redefine “the goal” to include the exercise of reason, as such (which would still misrepresent what is more nearly a habit than a goal). Hmm—this topic is quite the can of worms. I might open it properly later.
I think this exchange has become completely unmoored at this point.
I have no objection to reasoning about stuff for the fun of it, or out of habit, or to signal one’s in-group status or one’s superiority, or various other things. And I agree with you that many people, most especially soi-disant rationalists, do this all the time.
But I very much doubt that this is what Plasmon was getting at, or what EY was talking about, or what Yvain was talking about.