With epistemic dangers, I think there is a choice between “confront” and “evade”.
Not a bid for further explanation, just flagging that I’m not sure what you actually mean by this, as in which concrete moves correspond to each.
If a person is unwilling to take hits to their morale, they are unlikely to be wisely managing their morale and epistemics, and instead trading off too hard against their epistemics.
To me the empirical question is whether a person ought to be willing to take all possible hits to their morale for the sake of their epistemics. I have a consequentialist fear—and I think consequentialist means we’re necessarily talking empiricism—that any exceptions/compromises may be catastrophic.
. . .
It’s possible there’s a kind of meta-debate going on here, with some people (including me) sometimes holding underlying consequentialist/empirical beliefs that even engaging in consequentialist/empirical arguments about trading off against epistemics would have overall bad consequences, and/or an empirical belief that anyone who readily offers such arguments must not really care about epistemics, because they’re not [naively] treating them as sacred enough [1].
I hadn’t formulated it that way before, so I’m glad this post/discussion has helped me realize that arguably “it’s consequentialism/empiricism all the way up”, even if you ultimately claim that your epistemological consequentialism cashes out to some inviolable deontological rules.
[1] Not treating them as sacred enough, therefore they don’t really care, therefore can’t be trusted—this is my instinctive reaction when I encounter, say, post-rationalist arguments about needing to consider what’s useful, not just what’s true. Maybe it’s not always fair.
. . .
I had a revealing exchange with someone a few months ago about conversation norms on LessWrong. I was arguing for the necessity of considering the consequences of your speech and how that should factor into how one speaks. In the course of that debate, they said [paraphrasing]:
“You’re trying to get me to admit that I sometimes trade off things against truth, and once I’ve admitted that, we’re just ‘haggling over price’. Except, no.”
I think this response was a mistake, not least because their rigidity meant we couldn’t discuss the different consequences of different policies, or even what tradeoffs I thought I was making (fewer than they did). That discussion felt different from this post because it was mostly about what you say to others and how, but I see the analogy even when you’re considering how people think individually.
So, I may maintain my suspicions, but I won’t say “except, no.”
arguments about needing to consider what’s useful, not just what’s true.
Absent examples, it’s not clear how these trade off against each other. It seems like what’s useful is a subset of what’s true—offhand, I don’t know what color of flame is produced if cesium is burned (or what cesium is, whether it burns, whether the fumes would be harmful, etc.), but if I thought that might be useful knowledge in the future, I’d seek it out.
Very much so.