A partial response to your criticisms would be that, even if you do have truth as the criterion for your beliefs, then this still leaves the truth value of a wide range of beliefs underdetermined; so one postrational approach would be to use truth as the primary criterion for forming beliefs, and then use other criteria for filling in the beliefs which the criterion of truth doesn’t help us distinguish between.
I find this view unconvincing, and here’s why.
We can, it seems to me, divide what you say in the linked comment into two parts or aspects.
On the one hand, we have “the predictive processing thing”, as you put it. Well, it’s a lot of interesting speculation, and a potentially interesting perspective on some things. So far, at least, that’s all it is. Using it as any kind of basis for constructing a general epistemology is just about the dictionary definition of “premature”.
On the other hand, we have familiar scenarios like “I will go to the beach this evening”. These are quite commonplace and not at all speculative, so we certainly have to grapple with them.
At first blush, such a scenario seems like a challenge to the “truth as a basis for beliefs” view. Will I go to the beach this evening? Well, as you say—if I believe that I will, then I will, and if I don’t, then I won’t… how can I form an accurate belief, if its truth value is determined by whether I hold it?!
… is what someone might think, on a casual reading of your comment. But that’s not quite what you said, is it? Here’s the relevant bit:
Another way of putting it: what is the truth value of the belief “I will go to the beach this evening”? Well, if I go to the beach this evening, then it is true; if I don’t go to the beach this evening, it’s false. Its truth is determined by the actions of the agent, rather than the environment.
This seems significant, and yet:
“What is the truth value of the belief ‘snow is white’? Well, if snow is white, then it is true; if snow is not white, it’s false.”
What is the difference between this, and what you said? Is it merely the fact that “I will go to the beach this evening” is about the future, whereas “snow is white” is about the present? Are we saying that the problem is simply that the truth value of “I will go to the beach this evening” is as yet undetermined? Well, perhaps true enough, but then consider this:
“What is the truth value of the belief ‘it will rain this evening’? Well, if it rains this evening, then it is true; if it doesn’t rain this evening, it’s false.”
So this is about the future, and—like the belief about going to the beach—is, in some sense, “underdetermined by external reality” (at least, to the extent that the universe is subjectively non-deterministic). Of course, whether it rains this evening isn’t determined by your actions, but what difference does that make? Is the problem one of underdetermination, or agent-dependency? These are not the same problem!
Let’s return to my first example—“snow is white”—for a moment. Suppose that I hail from a tropical country, and have never seen snow (and have had no access to television, the internet, etc.). Is snow white? I have no idea. Now imagine that I am on a plane, which is taking me from my tropical homeland to, say, Murmansk, Russia. Once again, suppose I say:
“What is the truth value of the belief ‘snow is white’? Well, if snow is white, then it is true; if snow is not white, it’s false.”
For me (in this hypothetical scenario), there is no difference between this statement, and the one about it raining this evening. In both cases, there is some claim about reality. In both cases, I lack sufficient information to either accept the claim as true or reject it as false. In both cases, I expect that in just a few hours, I will acquire the relevant information (in the former case, my plane will touch down, and I will see snow for the first time, and observe it to be white, or not white; in the latter case, evening will come, and I will observe it raining, or not raining). And—in both cases—the truth of each respective belief will then come to be determined by external reality.
So the mere fact of some beliefs being “about the future” hardly justifies abandoning truth as a singular criterion for belief. As I’ve shown, there is little material difference between a belief that’s “about the future” and one that’s “about a part of the present concerning which we have insufficient information”. (And, by the way, we have perfectly familiar conceptual tools for dealing with such cases: subjective probability. What is the truth value of the belief “it will rain this evening”? But why have such beliefs? On Less Wrong, of all places, surely we know that it’s more proper to have beliefs that are more like “P(it will rain) = 0.25, P(it won’t rain) = 0.75”?)
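(A minimal sketch of what that looks like, with made-up numbers; the prior, the likelihoods, and the “dark clouds” observation are purely illustrative:)

```python
# A minimal sketch, with made-up numbers: holding "it will rain this evening"
# as a subjective probability and updating it by Bayes' rule as information
# arrives, rather than committing to a binary true/false belief.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) from the prior P(H) and the two likelihoods."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

p_rain = 0.25  # prior: P(it will rain this evening)

# Hypothetical midday evidence: dark clouds, assumed four times likelier if rain is coming.
p_rain = bayes_update(p_rain, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(f"P(rain | dark clouds) = {p_rain:.2f}")  # ~0.57

# Evening arrives and no rain is observed; the belief resolves to 0,
# just as "snow is white" resolves once the plane touches down in Murmansk.
p_rain = bayes_update(p_rain, p_e_given_h=0.0, p_e_given_not_h=1.0)
print(f"P(rain | no rain observed) = {p_rain:.2f}")  # 0.00
```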
So let’s set the underdetermination point aside. Might the question of agent-dependency trouble us more, and give us reason to question the solidity of truth as a basis for belief? Is there something significant to the fact that the truth value of the belief “I will go to the beach this evening” depends on my actions?
There is at least one (perhaps trivial) sense in which the answer is a firm “no”. So what if my actions determine whether this particular belief is true? My actions are part of reality, just like snow, just like rain. What makes them special?
Well—you might say—what makes my actions special is that they depend on my decisions, which depend (somehow) on my beliefs. If I come to believe that I will go to the beach, then this either is identical to, or unavoidably causes, my deciding to go to the beach; and deciding to go to the beach causes me to take the action of going to the beach. Thus my belief determines its own truth! Obviously it can’t be determined by its truth, in that case—that would be hopelessly circular!
Of course any philosopher worth his salt will find much to quarrel with, in that highly questionable account of decision-making. For example, “beliefs are prior to decisions” is necessary in order for there to be any circularity, and yet it is, at best, a supremely dubious axiom. Note that reversing that priority makes the circularity go away, leaving us with a naturalistic account of agent-dependent beliefs; free-will concerns remain, but those are not epistemological in nature.
And even free-will concerns evaporate if we adopt the perspective that decisions are not about changing the world, they are about learning what world you live in. If we take this view, then we are simply done: we have brought “I will go to the beach this evening” in line with “it will rain this evening”, which we have already seen to be no different from “snow is white”. All are simply beliefs about reality. As you gain more information about reality, each of these beliefs might be revealed to be true, or not true.
Very well, but suppose an account (like shminux’s) that leaves no room at all for decision-making is too radical for us to stomach. Suppose we reject it. Is there, then, something special about agent-dependent beliefs?
Let us consider again the belief that “I will go to the beach this evening”. Suppose I come to hold this belief (which, depending on which parts of the above logic we find convincing, either brings about, or is the result of, my decision to go to the beach this evening). But suppose that this afternoon, a tsunami washes away all the sand, and the beach is closed. Now my earlier belief has turned out to be false—through no actions or decisions on my part!
“Nitpicking!”, you say. Of course unforeseen situations might change my plans. Anyway, what you really meant was something like “I will attempt to go to the beach this evening”. Surely, an agent’s attempt to take some action can fail; there is nothing significant about that!
But suppose that this afternoon, I come down with a cold. I no longer have any interest in beachgoing. Once again, my earlier belief has turned out to be false.
More nitpicking! What you really meant was “I will intend to go to the beach this evening, unless, of course, something happens that causes me to alter my plans.”
But suppose that evening comes, and I find that I just don’t feel like going to the beach, and I don’t. Nothing has happened to cause me to alter my plans, I just… don’t feel like it.
Bah! What you really meant was “I intend to go to the beach, and I will still intend it this evening, unless of course I don’t, for some reason, because surely I’m allowed to change my mind?”
But suppose that evening comes, and I find that not only do I not feel like going to the beach, I never really wanted to go to the beach in the first place. I thought I did, but now I realize I didn’t.
In summary:
There is nothing special about agent-dependent beliefs. They can turn out to be true. They can turn out to be false. That is all.
Conflating beliefs with intentions, decisions, or actions, is a mistake as unfortunate as it is elementary.
And forgetting about probability is, probably, most unfortunate of all.
I agree that whether or not the belief is about something that happens in the future is irrelevant (at least if we’re talking about physics-time; ScottG’s original post specifically said that it was about logical time). I think that I also agree that shminux’s view is a consistent way of looking at this. But as you say, if you did adopt that view, then we can’t really talk about how to make decisions in the first place, and it would be nice if we could. (Hmm, are we rejecting a true view because it’s not useful, in favor of trying to find a view which would be no less true but which would be more useful? That’s a postrationalist move right there...)
So that leaves us with your objection to the view where we do try to maintain decisions, and find agent-dependent beliefs problematic. I’m not sure I understand your objection there, however. At least to some extent you seem to be pointing at external circumstances which might affect our decision, but my original comment already noted that external circumstances do also play a role rather than the agent’s decision being the sole determinant.
I’m also curious about whether you disagree with the original post where my comment was posted, and ScottG’s argument that “the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them”, and that this renders standard Bayesian probability inapplicable. If you disagree with that, then it might be better to have this conversation in the comments of that post, where ScottG might chime in.
So that leaves us with your objection to the view where we do try to maintain decisions, and find agent-dependent beliefs problematic. I’m not sure I understand your objection there, however. At least to some extent you seem to be pointing at external circumstances which might affect our decision, but my original comment already noted that external circumstances do also play a role rather than the agent’s decision being the sole determinant.
I… don’t know that I can explain my point any better than I already have.
Perhaps I should note that there’s a sense in which “beliefs determine our actions” which I find to be true but uninteresting (at least in this context). This is the utterly banal sense of “if I believe that it is raining outside, then I will bring an umbrella when I go for a walk”—i.e., the sense in which all of our actions are, in one way or another, determined by our beliefs.
Of course, there is nothing epistemologically challenging about this; it is just the ordinary, default state of affairs. You said:
An important framing here is “your beliefs determine your actions, so how do you get the beliefs which cause the best actions”.
If the result of thinking like this is that you decide to adopt false beliefs in order for “better actions” to (allegedly) result than if you had only true beliefs, then this is foolishness; but there is no epistemological challenge here—no difficulty for the project of epistemic rationality. Beyond that, nshepperd’s comments elsethread have dealt with this aspect of the matter, and I have little to add.
The (alleged) difficulty lies with beliefs which not just (allegedly) determine our decisions, but whose truth value is, in turn, determined by our decisions. (For example, “I will go to the beach this evening”.)
But I have shown how there is not, in fact, any difficulty with those beliefs after all.
You said:
A partial response to [nshepperd’s] criticisms would be that, even if you do have truth as the criterion for your beliefs, then this still leaves the truth value of a wide range of beliefs underdetermined;
But I have shown how this is not the case.
Thus your response to nshepperd’s criticisms, it seems, turns out to be invalid.
but there is no epistemological challenge here—no difficulty for the project of epistemic rationality. Beyond that, nshepperd’s comments elsethread have dealt with this aspect of the matter, and I have little to add.
The (alleged) difficulty lies with beliefs which not just (allegedly) determine our decisions, but whose truth value is, in turn, determined by our decisions. (For example, “I will go to the beach this evening”.)
But I have shown how there is not, in fact, any difficulty with those beliefs after all.
Hmm. You repeatedly use the word “difficulty”. Are you interpreting me to be saying that this would pose some kind of an insurmountable challenge for standard epistemic rationality? I was trying to say the opposite; that unlike what nshepperd was suggesting, this is perfectly in line and compatible with standard epistemic rationality.
I am referring to this bit:
A partial response to your criticisms would be that, even if you do have truth as the criterion for your beliefs, then this still leaves the truth value of a wide range of beliefs underdetermined;
And, of course, this bit:
so one postrational approach would be to use truth as the primary criterion for forming beliefs, and then use other criteria for filling in the beliefs which the criterion of truth doesn’t help us distinguish between.
And I am saying that this position is wrong. I am saying that there is no special underdetermination. I am saying that there is no problem with using truth as the only criterion for beliefs. I am saying, therefore, that there is no reason to use any other criteria for belief, and that “postrationality” as you describe it is unmotivated.
Okay. I should probably give a more concrete example of what I mean.
First, here’s a totally ordinary situation that has nothing to do with postrationality: me deciding whether I want to have a Pepsi or a Coke. From my point of view as the decision-maker, there’s no epistemically correct or incorrect action; it’s up to my preferences to choose one or the other. From the point of view of instrumental rationality, there is of course a correct action: whichever drink best satisfies my preferences at that moment. But epistemic rationality does not tell me which one I should choose; that’s in the domain of instrumental rationality.
My claim is that there are analogous situations where the decision you are making is “what should I believe”, where epistemic rationality does not offer an opinion one way or the other; the only criterion comes from instrumental rationality, where it would be irrational not to choose the one that best fulfills your preferences.
As an example of such a situation, say that you are about to give a talk to some audience. Let’s assume that you are basically well prepared, facing a non-hostile audience, etc., so there is no external reason why this talk would need to go badly. The one thing that most matters is how confident you are in giving the talk, which in turn depends on how you believe it will go:
If you believe that this talk will go badly, then you will be nervous and stammer, and this talk will go badly.
If you believe that this talk will go well, then you will be confident and focused on your message, and the talk will go well.
Suppose for the sake of example that you could just choose which belief you have, and also that you know what effects this will have. In that case, even though you are choosing which belief to have, from the point of view of epistemic rationality, they are both equally valid. If you choose to believe that the talk will go badly, then it will go badly, so the belief is epistemically valid; if you choose to believe that it will go well, then it will go well, so that belief is epistemically valid as well. The only criterion you get is the one from instrumental rationality: do you prefer your talk to go well, or do you prefer it to go badly?
From my point of view as the decision-maker, there’s no epistemically correct or incorrect action
I’m not sure I know what an “epistemically correct action” or an “epistemically incorrect action” is. Actions aren’t the kinds of things that can be “epistemically correct” or “epistemically incorrect”. This would seem to be a type error.
But epistemic rationality does not tell me which one I should choose; that’s in the domain of instrumental rationality.
Indeed…
My claim is that there are analogous situations where the decision you are making is “what should I believe”, where epistemic rationality does not offer an opinion one way or the other;
The epistemically correct belief is “the belief which is true” (or, of course, given uncertainty: “the belief which is most accurate, given available information”). This is always the case.
The one thing that most matters is how confident you are in giving the talk, which in turn depends on how you believe it will go
The correct belief, obviously, is:
“How the talk will go depends on how confident I am. If I am confident, then the talk will go well. If I am not confident, it will go badly.”
(Well, actually more like: “If I am confident, then the talk is more likely than not to go well. If I am not confident, it is more likely than not to go badly.”)
Conditionalizing, I can then plug in my estimate of the probability that I will be confident.
If I am able to affect this probability—such as by deciding to be confident (if I have this ability), or by taking some other action (such as taking anxiolytic medication, imagining the audience naked, doing some exercise beforehand, etc.)—then, of course, I will do that.
I will then—if I feel like doing so—revise my probability estimate of my confidence, and, correspondingly, my probability estimate of the talk going well. Of course, this is not actually necessary, since it does not affect anything one way or the other.
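(A minimal sketch of that conditionalizing, again with made-up numbers; the conditional probabilities and the size of the “intervention” are purely illustrative:)

```python
# A minimal sketch, with made-up numbers, of the conditionalizing described above:
# the accurate belief is the conditional structure itself, and the overall estimate
# P(talk goes well) simply follows from the current estimate of P(confident).

def p_talk_goes_well(p_confident,
                     p_well_given_confident=0.9,
                     p_well_given_not_confident=0.3):
    """Law of total probability over the two confidence states."""
    return (p_well_given_confident * p_confident
            + p_well_given_not_confident * (1.0 - p_confident))

p_confident = 0.4
print(f"Before any intervention: P(well) = {p_talk_goes_well(p_confident):.2f}")  # 0.54

# A confidence-raising action (exercise, medication, imagining the audience naked)
# shifts the estimate of P(confident); the conditional belief itself never had to
# be revised, let alone chosen for its usefulness.
p_confident = 0.8
print(f"After the intervention:  P(well) = {p_talk_goes_well(p_confident):.2f}")  # 0.78
```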
Suppose for the sake of example that you could just choose which belief you have
As always, I would choose to have the most accurate belief, of course, as described above.
In that case, even though you are choosing which belief to have, from the point of view of epistemic rationality, they are both equally valid.
No, choosing to have any but the most accurate belief is epistemically incorrect.
The only criterion you get is the one from instrumental rationality: do you prefer your talk to go well, or do you prefer it to go badly?
Indeed not; our criterion is, as always, the one from epistemic rationality, i.e. “have the most accurate beliefs”.
I’m also curious about whether you disagree with the original post where my comment was posted, and ScottG’s argument that “the standard justifications of Bayesian probability are in a framework where the facts that you are uncertain about are not in any way affected by whether or not you believe them”, and that this renders standard Bayesian probability inapplicable. If you disagree with that, then it might be better to have this conversation in the comments of that post, where ScottG might chime in.
I’m afraid I found that post incomprehensible.
I will have more to say later, but:
But as you say, if you did adopt that view, then we can’t really talk about how to make decisions in the first place, and it would be nice if we could. (Hmm, are we rejecting a true view because it’s not useful, in favor of trying to find a view which would be no less true but which would be more useful? That’s a postrationalist move right there...)
I did not say that.
I said nothing about “it would be nice if we could”, nor did I suggest that I both accept shminux’s view as true, and am willing to reject it due to the aforesaid “it would be nice…” consideration.
Please don’t put words in my mouth to make it seem like I actually already agree with you; that is extremely annoying.
Apologies, I misinterpreted you then.