But I balk at the word “epistemic” being applied in cases where the guidance in question seems to be catering to cognitive biases rather than working to overcome them.
This is indeed a crux; I view this as not relevant to the question of whether a rule is called “epistemic” or not. I see it as less about whether you are “catering to” or trying to “overcome” cognitive biases in yourself or in your reader, and more about whether you’re accurately modeling the consequences of your actions.
Most of my post was arguing for dropping less context when applying this rule or using this term, but here I will actually argue for dropping more.
Facts about cognitive biases are ordinary facts about how minds work, and thus ordinary facts about how the world works, which you can use to predict the consequences of your writing on other minds. Other, less debatable rules of epistemic conduct may follow from other kinds of facts about the world that have little or nothing to do with cognitive biases, but I don’t see the value in deciding whether a rule may be called epistemic based on which type of true facts it follows from.
The LLM example in the OP is intended to illustrate this point obliquely; another example which stretches my own view to its limit is the following:
Suppose including the word “fnorp” anywhere in your post would cause Omega to reconfigure the brains of your readers so that they automatically agreed with anything you said. I would then say that not including the word “fnorp” in your post is a good rule of epistemic conduct. But I can see how in this case the actual rule might be “don’t do things which causally result in your readers having non-consensual brain surgery performed on them”, which is less clearly a rule about epistemics, and so attaching the word “epistemic” to it is less justified.
I’m not particularly interested in assigning blame to authors or readers when such rules are not followed, and indeed I make no claims about what the consequences should be, if any, for not following the purported rules.
Eliezer’s view (apparently) is that if you don’t follow the rules, you get one comment addressing a couple of your object-level claims, and then no further engagement from him personally. That seems reasonable to me, but also not particularly relevant to the question of what to call such rules or how to decide whether someone is following them.
Eliezer’s view (apparently) is that if you don’t follow the rules, you get one comment addressing a couple of your object-level claims, and then no further engagement from him personally. That seems reasonable to me
The problem with allowing yourself to do this sort of thing is that it creates an incentive to construct arbitrary “rules of epistemic conduct”, announce them to nobody (or else make them very difficult to follow), and then use non-compliance as an excuse to disengage from discussions and leave criticism unaddressed. If challenged, you retort that it was not you who “defected” first, but your critics—see, look, they broke “the rules”! Surely you can’t be expected to treat with such rule-breakers?!
The result is that you just stop talking to anyone who disagrees with you. Oh, you might retort, rebut, rant, or debunk—but you don’t talk. And you certainly don’t listen.
Of course there is some degree of blatant “logical rudeness” which makes it impossible to engage productively with someone. And, at the same time, it’s not necessarily (and, indeed, not likely to be) worth your time to engage with all of your critics, regardless of how many “rules” they did or did not break.
But if you allow yourself to refuse engagement in response to non-compliance with arbitrary rules that you made up, you’re undermining your ability to benefit from engagement with people who disagree with you, and you’re reducing your credibility in the eyes of reasonable third parties—because you’re showing that you cannot be trusted to approach disagreement fairly.
This is indeed a crux; I view this as not relevant to the question of whether a rule is called “epistemic” or not. I see it as less about whether you are “catering to” or trying to “overcome” cognitive biases in yourself or in your reader, and more about whether you’re accurately modeling the consequences of your actions.
Conflating epistemics with considerations like this is deadly to epistemics. If we’re going to approach epistemic rationality in this fashion, then we ought to give up immediately, as any hope for successful truthseeking is utterly unjustified with such an approach.