I think I’m willing to concede that there is something of an empirical question about what works best for truth-seeking, as much as that feels like a dangerous statement to acknowledge. Though seemingly true, it feels like the kind of thing that people who try to get you to commit bad epistemic moves like to raise [1].
There’s a tricky balance to maintain here. On one hand, we don’t want to commit bad epistemic moves. On the other hand, failing to acknowledge that something is an empirical question when evidence to that effect is presented is itself a bad epistemic move.
With epistemic dangers, I think there is a choice between “confront” and “evade”. Both are dangerous. Confronting the danger might harm you epistemically, and is frequently the wrong idea — like “confronting” radiation. But evading the danger might harm you epistemically, and is also frequently wrong — like “evading” a treatable illness. Ultimately, whether to confront or evade is an empirical question.
Allowing questions of motivation to factor into one’s truth-seeking process feels most perilous to me, mostly because it seems too easy to claim that one’s motivation will be adversely affected in order to justify any desired behavior. I don’t deny that certain moves might destroy motivation, but the risks of allowing that fear to serve as a justification for changing behavior seem much worse. Granted, that’s an empirical claim I’m making.
One good test here might be: Is a person willing to take hits to their morale for the sake of acquiring the truth? If a person is unwilling to take hits to their morale, they are unlikely to be wisely managing their morale and epistemics, and instead trading off too hard against their epistemics. Another good test might be: If the person avoids useful behavior X in order to maintain their motivation, do they have a plan to get to a state where they won’t have to avoid behavior X forever? If not, that might be a cause for concern.
With epistemic dangers, I think there is a choice between “confront” and “evade”.
Not a bid for further explanation, just flagging that I’m not sure what you actually mean by this, as in which concrete moves correspond to each.
If a person is unwilling to take hits to their morale, they are unlikely to be wisely managing their morale and epistemics, and instead trading off too hard against their epistemics.
To me the empirical question is whether a person ought to be willing to take all possible hits to their morale for the sake of their epistemics. I have a consequentialist fear (and I think consequentialist means we’re necessarily talking empiricism) that any exceptions or compromises may be catastrophic.
. . .
It’s possible there’s a kind of meta-debate going on here, with some people (including me) sometimes holding underlying consequentialist/empirical beliefs that even engaging in consequentialist/empirical arguments about trading off against epistemics would have bad consequences overall, and/or an empirical belief that anyone who would readily offer such arguments must not really care about epistemics, because they’re not [naively] treating them as sacred enough [1].
I hadn’t formulated this in that way before, so I’m glad this post/discussion has helped me realize that arguably “it’s consequentialism/empiricism all the way up”, even if you ultimately claim that your epistemological consequentialism cashes out to some inviolable deontological rules.
[1] Not treating them as sacred enough, therefore they don’t really care, therefore they can’t be trusted: this is my instinctive reaction when I encounter, say, post-rationalist arguments about needing to consider what’s useful, not just what’s true. Maybe it’s not always fair.
. . .
I had a revealing exchange with someone a few months ago about conversation norms on LessWrong. I was arguing for the necessity of considering the consequences of your speech and how that should factor into how one speaks. In the course of that debate, they said [paraphrasing]:
“You’re trying to get me to admit that I sometimes trade off things against truth, and once I’ve admitted that, we’re just ‘haggling over price’. Except, no.”
I think this response was a mistake, not least because their rigidity meant we couldn’t discuss the different consequences of different policies, or even which tradeoffs I thought I was making (fewer than they did). That discussion felt different from this post because it was mostly about what you say to others and how, but I see the analogy even to how people individually think.
So, I may maintain my suspicions, but I won’t say “except, no.”
arguments about needing to consider what’s useful, not just what’s true.
Absent examples, it’s not clear how these trade off against each other. It seems like what’s useful is a subset of what’s true: offhand, I don’t know what color of flame is produced if cesium is burned (or what cesium is, whether it burns, whether the fumes would be harmful, etc.), but if I thought that might be useful knowledge in the future, I’d seek it out.