A partial response to your criticisms would be that, even if you do have truth as the criterion for your beliefs, this still leaves the truth value of a wide range of beliefs underdetermined; so one postrational approach would be to use truth as the primary criterion for forming beliefs, and then use other criteria for filling in the beliefs which the criterion of truth doesn’t help us distinguish between.
And I am saying that this position is wrong. I am saying that there is no special underdetermination. I am saying that there is no problem with using truth as the only criterion for beliefs. I am saying, therefore, that there is no reason to use any other criteria for belief, and that “postrationality” as you describe it is unmotivated.
Okay. I should probably give a more concrete example of what I mean.
First, here’s a totally ordinary situation that has nothing to do with postrationality: me deciding whether I want to have a Pepsi or a Coke. From my point of view as the decision-maker, there’s no epistemically correct or incorrect action; it’s up to my preferences to choose one or the other. From the point of view of instrumental rationality, there is of course a correct action: whichever drink best satisfies my preferences at that moment. But epistemic rationality does not tell me which one I should choose; that’s in the domain of instrumental rationality.
My claim is that there are analogous situations where the decision you are making is “what should I believe”, where epistemic rationality does not offer an opinion one way or the other; the only criterion comes from instrumental rationality, where it would be irrational not to choose the belief that best fulfills your preferences.
As an example of such a situation, say that you are about to give a talk to some audience. Let’s assume that you are basically well prepared, facing a non-hostile audience, etc., so there is no external reason why this talk would need to go badly. The one thing that most matters is how confident you are in giving the talk, which in turn depends on how you believe it will go:
If you believe that this talk will go badly, then you will be nervous and stammer, and this talk will go badly.
If you believe that this talk will go well, then you will be confident and focused on your message, and the talk will go well.
Suppose for the sake of example that you could just choose which belief you have, and also that you know what effects this will have. In that case, even though you are choosing which belief to have, from the point of view of epistemic rationality, they are both equally valid. If you choose to believe that the talk will go badly, then it will go badly, so the belief is epistemically valid; if you choose to believe that it will go well, then it will go well, so that belief is epistemically valid as well. The only criterion you get is the one from instrumental rationality: do you prefer your talk to go well, or do you prefer it to go badly?
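To make the claimed symmetry concrete, here is a minimal Python sketch of the scenario just described (everything in it is a hypothetical illustration, not part of the original exchange): whichever belief is adopted, the outcome matches it, so each belief is accurate once chosen.

```python
# Hypothetical model of the self-fulfilling talk scenario described above.
# The outcome of the talk is determined entirely by the belief you adopt.

def outcome(belief: str) -> str:
    """The talk goes however you believe it will go."""
    return "well" if belief == "well" else "badly"

for belief in ("well", "badly"):
    result = outcome(belief)
    print(f"believe the talk goes {belief!r} -> it goes {result!r}; "
          f"belief accurate: {belief == result}")

# Both beliefs come out accurate once adopted, so (on this framing) accuracy
# alone does not favour either one; only the preference for the talk going
# well does.
```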
From my point of view as the decision-maker, there’s no epistemically correct or incorrect action
I’m not sure I know what an “epistemically correct action” or an “epistemically incorrect action” is. Actions aren’t the kinds of things that can be “epistemically correct” or “epistemically incorrect”. This would seem to be a type error.
But epistemic rationality does not tell me which one I should choose; that’s in the domain of instrumental rationality.
Indeed…
My claim is that there are analogous situations where the decision you are making is “what should I believe”, where epistemic rationality does not offer an opinion one way or the other;
The epistemically correct belief is “the belief which is true” (or, of course, given uncertainty: “the belief which is most accurate, given available information”). This is always the case.
The one thing that most matters is how confident you are in giving the talk, which in turn depends on how you believe it will go
The correct belief, obviously, is:
“How the talk will go depends on how confident I am. If I am confident, then the talk will go well. If I am not confident, it will go badly.”
(Well, actually more like: “If I am confident, then the talk is more likely than not to go well. If I am not confident, it is more likely than not to go badly.”)
Conditionalizing, I can then plug in my estimate of the probability that I will be confident.
If I am able to affect this probability—such as by deciding to be confident (if I have this ability), or by taking some other action (such as taking anxiolytic medication, imagining the audience naked, doing some exercise beforehand, etc.)—then, of course, I will do that.
I will then—if I feel like doing so—revise my probability estimate of my confidence, and, correspondingly, my probability estimate of the talk going well. Of course, this is not actually necessary, since it does not affect anything one way or the other.
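As a concrete sketch of this conditionalization (all numbers here are hypothetical, chosen only to illustrate the arithmetic; they are not from the original discussion):

```python
# Hypothetical conditional probabilities for the talk scenario.
p_well_given_confident = 0.9     # confident -> more likely than not to go well
p_well_given_unconfident = 0.3   # not confident -> more likely than not to go badly

# My initial estimate of the probability that I will be confident.
p_confident = 0.5

# Law of total probability:
# P(well) = P(well | confident) * P(confident) + P(well | not confident) * P(not confident)
p_well = (p_well_given_confident * p_confident
          + p_well_given_unconfident * (1 - p_confident))
print(f"P(talk goes well) = {p_well:.2f}")  # 0.60

# If I can raise P(confident) -- by preparation, or any of the measures
# mentioned above -- I simply plug in the revised estimate:
p_confident = 0.9
p_well = (p_well_given_confident * p_confident
          + p_well_given_unconfident * (1 - p_confident))
print(f"P(talk goes well, after intervention) = {p_well:.2f}")  # 0.84
```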
Suppose for the sake of example that you could just choose which belief you have
As always, I would choose to have the most accurate belief, of course, as described above.
In that case, even though you are choosing which belief to have, from the point of view of epistemic rationality, they are both equally valid.
No, choosing to have any but the most accurate belief is epistemically incorrect.
The only criterion you get is the one from instrumental rationality: do you prefer your talk to go well, or do you prefer it to go badly?
Indeed not; our criterion is, as always, the one we get from epistemic rationality, i.e. “have the most accurate beliefs”.