This would be taken by many as a concession of defeat, just as saying “After conditionalizing on your evidence I have lowered my credence in P” would be, and so will be hard for people to do in practice.
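For concreteness, conditionalization just means updating a credence by Bayes' rule. A minimal sketch, with purely illustrative numbers (the prior and likelihoods below are my own assumptions, not anything from the discussion):

```python
# Bayesian conditionalization: update a credence in proposition P
# after observing evidence E. All numbers here are illustrative.

def conditionalize(prior_p, likelihood_e_given_p, likelihood_e_given_not_p):
    """Return P(P | E) via Bayes' rule."""
    numerator = likelihood_e_given_p * prior_p
    total_evidence = numerator + likelihood_e_given_not_p * (1 - prior_p)
    return numerator / total_evidence

# Evidence that is twice as likely if P is false lowers the credence in P:
posterior = conditionalize(prior_p=0.7,
                           likelihood_e_given_p=0.3,
                           likelihood_e_given_not_p=0.6)
print(round(posterior, 3))  # 0.538, down from the prior of 0.7
```

The point of the sketch is only that the posterior moves mechanically with the evidence; whether to *say* "I have lowered my credence" is the separate, social question at issue here.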
When I’m talking to people of that quality, I very rarely need to integrate substantial new evidence into my model.
If the evidence is not substantial, you have no obligation to make a big deal of the fact that you are updating on it; indeed, doing so would be a Gricean deception, since it would imply the update mattered more than it did.
When someone does present you with substantial new evidence, you should consider that you may be dealing with one knowledgeable in the subject area; if so, conceding defeat, or showing before others that you have been presented with substantial new evidence, should not be out of the question. What categorical imperative would you want to apply for people who encounter substantial new evidence?
If the appearance of conceding defeat is (for whatever reason) terribly scary, then you may, perhaps, choose between acknowledging new evidence internally and saying nothing about this externally; or you may fail to acknowledge it even internally. Neither course of action is especially virtuous, but self-deception is not more virtuous than silence.
Other than moral reasons, what flaws are there in the course of action of arguing against the evidence whilst acknowledging internally that your opponent is right?
Off-hand, here are three pragmatic costs of doing so, stripped of moral language as far as I can manage:
1) Cognitive dissonance. For most of us, behavior influences belief, so behaving as though the presented evidence wasn’t compelling can (and likely will) interfere with our ability to properly incorporate that evidence in our thinking.
2a) Reputation. If I fail to signal internal state reliably, I may develop a reputation as an unreliable signaler. There are social costs to that, as well as nastier words for it.
2b) Reputation, again. Evidence of true things is a valuable thing to have. If someone gives it to me and I refuse to acknowledge it, I’m refusing to acknowledge a gift. There are social costs to that as well.
3) Operant conditioning opportunity costs. Making an argument that others find compelling is emotionally rewarding for most people. If the person you’re arguing with is one of those people, and you signal that you found their argument compelling, basic conditioning principles make it more likely that the next time they have evidence you ought to find compelling they’ll share it with you. Conversely, if you don’t signal it, they’re less likely to do it again. Therefore, continuing to argue as though the evidence were uncompelling means losing chances to get benefits later.