Thanks for the long reply.
An aside: I think “moderate relativism” is somewhat tautologically true, but it’s also a much-abused and easy-to-abuse idea that shouldn’t be conceded under that name. Perhaps saying morality is “value-centric” or “protocol-based” is a better choice of words, with each term referring to a different part of “morality”; by the second I mean a social protocol for building consensus and coordination. After all, relativism seems to imply that, e.g., we can’t punish people who commit honor killings. That is mostly false, and does not follow from the inherent arbitrariness of morality.
On our inability to fight bad epistemics: I think this is somewhat of an advantage. It seems to me that “traditional rationality” was, and is, mostly focused on this problem of consensus-truth, but LW abandoned that fort, seeing instead that smarter, more rational people could do better for themselves if they stopped fighting the byzantine epistemics of more normal people. So on LW we speak of the importance of priors and Bayes, which is pretty much a mindkiller for “religious” (broadly conceived) people. A theist will just say that his prior on God is astronomical (which might actually be true), so the current Bayes factor does not make him stop believing. All in all, building an accurate map is a different skillset from getting other people to accept your map, and it might be a good idea to treat the two somewhat separately. My own suspicion is that there is something akin to the g factor for being rational, and of course the g factor itself is highly relevant. So I think making normal people “rational” might not even be possible. Sure, (immense) improvement is possible, but I doubt most people will ever come to “our” ways. For one thing, epistemic rationality often makes one worse off by default, especially in more “normal” social settings. I have often contrasted my father’s intelligent irrationality with my own rationality, and he usually comes out far ahead.
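To spell out the theist’s arithmetic (the numbers below are purely illustrative, not anything from the original discussion): in odds form, Bayes’ theorem says the posterior odds equal the prior odds times the Bayes factor,

$$\underbrace{\frac{P(G \mid E)}{P(\neg G \mid E)}}_{\text{posterior odds}} \;=\; \underbrace{\frac{P(G)}{P(\neg G)}}_{\text{prior odds}} \times \underbrace{\frac{P(E \mid G)}{P(E \mid \neg G)}}_{\text{Bayes factor}}.$$

So with prior odds of, say, $10^{6}:1$ in favor of $G$, and evidence worth a Bayes factor of $10^{-2}$ against, the posterior odds are still $10^{4}:1$. An “astronomical” prior really does swamp any ordinary amount of evidence, and the disagreement gets pushed back into the priors, which Bayes alone cannot adjudicate.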