If you don’t think belief in god has a real-world impact on other beliefs you hold, then you have a very odd view of the situation. If god exists, we should want to believe he exists. If he doesn’t, we should want to believe he doesn’t exist. If we’re unsure, just shrugging and saying “we’ll never know” doesn’t get us anywhere closer to the truth.
True… but coming from the perspective of a former believer, I can absolutely state that it’s a serious mind-fuck trying to answer the question with the utmost certainty. I can relate to the mentality, as it’s come to me in phases. I have delved into study and then simply burnt out because of how many subject areas this debate covers. See my running book list.
Don’t get me wrong—I still want to answer the question, but to a degree, I have become a bit “Bleh” about it as I just don’t know what will raise my confidence to such a degree that I can just live my life in peace until god himself comes down from the sky to tell me of his existence.
For the time being, I can simply state that I don’t believe… but that’s about it. I find it unlikely, am not satisfied by the evidence, and think there are some serious issues with Christianity in particular.
Then again, the uncertainty lingers in my mind and creates a bit of an obsession. It’s been hard for me to move on with my life—that causes me to research intensely, and that burns me out. This cycle brings me to my current state, where I have tried to just accept that I simply don’t believe in god and that I find it faaaar more pleasurable to do woodworking and make very nice cribbage boards for friends.
Does that make any sense? I just wanted to chime in from the point of view of someone in an odd situation. I may have wrongly assumed that you have been a non-believer for quite a while. As someone coming from relatively recent belief (1.25 years ago), I have experienced the frustrations of thinking “We’ll never know.”
The pattern “if X is true, we should want to believe X; if X is false, we should want to believe non-X” is perhaps a good rhetorical device, but I still wonder what it means practically (it would be easier without the “want to” parts). If it means “do not believe in falsities”, then I agree. If it means “try to have a correct opinion about any question ever asked”, it’s clearly poor advice for any agent with limited cognitive capacities.
Moreover, you probably deny the existence of compartmentalisation. There are lots of religious believers who are generally sane, intelligent, and right in most of their other beliefs.
If I believe something, and someone proves me wrong, what is my reaction to being proved wrong?
For most people and most subjects, it is negative… we don’t like it. Being proven wrong feels like a loss of status, a defeat. We talk about “losing” an argument, for example, which invokes the common understanding that it’s better to win than to lose.
I understand that pattern to be encouraging a different stance, in which if someone proves me wrong, I should instead thank them for giving me what I want: after all, X was false, so I wanted to believe not-X (though I didn’t know it), and now I do in fact believe not-X. Yay!
The problem I see is that once I know that X is false, I may be angry for losing the argument, but I already believe non-X. Somebody (Wittgenstein?) said that if there was a verb meaning “believing falsely”, it would have no first person singular.
I don’t quite see the problem.
Yes, once you’ve convinced me X is false, I believe non-X.
But I still have a choice: I can believe non-X and be happy about it, or believe non-X and be upset about it.
And that choice has consequences.
For example, I’m more likely in the future to seek out things that made me happy in the past, and less likely to seek out things that have upset me. So if being shown to be wrong upsets me, I’m less likely to seek it out in the future, which is a good way to stay wrong.
I really wonder about this—how much control do you think you have over whether you believe ~X? I actually highly doubt that you do have a choice. You’re either convinced or you’re not, and belief or non-belief is the end result, but you don’t choose which path you take.
Or perhaps you could clarify which beliefs you think fall in this category?
How about X = the sky is blue. Do you still think your statement holds?
You can believe the sky is blue and be happy
You can believe the sky is not blue and be unhappy
I don’t think you have a choice about believing that the sky is blue. Were you actually able to believe whatever you wanted about the matter… I’d possibly be a) impressed, b) fascinated, or c) quite concerned :)
What do you think? The relationship between choice and belief is quite interesting to me. I’ve written some about it HERE.
Edit: this whole comment was based on my imagination… I’m going to leave it anyway—I’ve made a mistake and been corrected. Sorry, TheOtherDave; I should have read more carefully.
No worries.
And, just for the record, while I do think we have a fair amount of control over what we believe, I don’t think that it works the way you’re probably envisioning my having meant what you thought I said. In particular, I don’t think it’s a matter of exerting willpower over my beliefs, or anything like that.
If I wanted to change my belief about the color of the sky, I’d have to set up a series of circumstances that served as evidence for a different belief. (And, indeed, the sky is often grey, or white, or black. Especially in New England.) That would be tricky, and I don’t think I could do it in the real world, where the color of the sky is such a pervasive and blatant thing. But for a lot of less concrete beliefs, I’ve been able to change them by manipulating the kinds of situations I find myself in and what I attend to about those situations.
Come to that, this is more or less the same way I influence whether I’m happy or upset.
Interesting—that makes sense… though do you think you’d need to somehow perform these acts subconsciously? I guess you clarified that the sky was too obvious, but when you first wrote this, I thought that it wouldn’t work if I had a “meta-awareness” of my “rigging” of circumstances to produce a given belief. I’d know I was trying to trick myself and thus it’d seem like a game.
But perhaps that’s why you retracted for such a clear-cut objective case?
I’ll ponder this more. I appreciate the comment. I’m sure I do this myself and often talk aloud to myself when I’m feeling something I think is irrational, say about being late to a meeting and feeling extremely self-critical or worrying about what others think. I kind of talk to myself and try to come to the conclusion that what’s happened has happened and, despite the setback which led me to be late, I’m doing the best I can now and thus shouldn’t be condemning myself.
Kind of like that?
IME, “subconsciously” doesn’t really enter into it… I’m not tricking myself into believing something (where I would have to be unaware of the trick for it to work), I’m setting up a series of situations that will demonstrate what I want to believe. It’s a little bit like training my dog, I guess… it works without reference to an explicit cognitive representation of what’s being learned.
But then, I’ve never tried to do this for something that I actually believe to be false, as opposed to something that I either believe to be true but react to emotionally as though it were false, or something where my confidence in its truth or falsehood is low and I’m artificially bolstering one of them for pragmatic reasons.
Maybe I just need a concrete example :)
I do think it makes more sense now that you’ve added that it’s [most likely] something you already believe in but are not “emotionally aligned” with. I’d be interested in an example of using this to promote action in a near 50⁄50 truth/falsehood estimate situation.
Thanks for the continued discussion!
My favorite example is sort of a degenerate case, and so might be more distracting than illustrative, but I’ll share it anyway: a programmer friend of mine has a utility on his desktop called “placebo.”
When executed, it prints out the following text over the course of about 15 seconds:
“Working.........Done.”
That’s all.
It’s something he uses when caught up in a complex project to remind himself that he can write code projects that work, and thereby to alter his confidence level in his ability to make this code project work.
This is, of course, ridiculous: his ability to write a three-line program that generates text on a screen has no meaningful relationship to his ability to make complicated code work as intended—that “placebo” runs is just as consistent with the current project failing as it is with it succeeding, and is therefore evidence of neither—and in any case running it for a second time doesn’t give him any new evidence that he didn’t already have. It’s purely a mechanism for irrationally changing his beliefs about his likely success. (That said, the choice of whether and when to use that mechanism can be a rational choice.)
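(For concreteness, here is a minimal sketch of what such a “placebo” utility might look like. The original isn’t described beyond its output, so the language, structure, and exact timing below are assumptions.)

```python
#!/usr/bin/env python3
"""A sketch of the "placebo" utility described above: it prints
"Working.........Done." over roughly 15 seconds and does nothing else."""
import sys
import time


def main() -> None:
    sys.stdout.write("Working")
    sys.stdout.flush()
    for _ in range(9):      # the nine dots in "Working.........Done."
        time.sleep(1.5)     # spread the run over ~15 seconds total
        sys.stdout.write(".")
        sys.stdout.flush()
    time.sleep(1.5)
    sys.stdout.write("Done.\n")


if __name__ == "__main__":
    main()
```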
Awesome. Thank you for this. I feel so much more competent now after having done this.
That’s great, though it probably would be helpful to have a more pertinent/universal example to go along with your original explanation:
...I’m setting up a series of situations that will demonstrate what I want to believe.
I think I’m still a bit lost on what category of beliefs you would use this on. It seems like they are generally subjective sorts of “flexible” beliefs; nothing concerning empirical evidence. Is that right?
More like, “I want to be happy in all circumstances, and happiness is within my control, thus I will make myself believe that event X should increase my happiness.” (And then you go about “setting up a series of situations” that increases your happiness about X.)
Am I remotely close?
I would say that “I will successfully complete this project” is an empirical belief, in the sense that there’s an expected observation if it’s true that differs from the expected observation if it’s false.
So if I set up a series of events (e.g., “placebo” executions) that alter my confidence level about that assertion, I am in fact modifying an empirical belief.
Would you disagree?
Anyway, the placebo example could be reframed as “I want to be confident about my success on this project, and my confidence is subject to my influence, thus I will act so as to increase my estimate that I’ll succeed.” And then I go about setting up a series of situations (e.g., “placebo” executions) that increase my estimate of success.
Which is similar to what you suggested, though not quite the same.
Yes… I now see that I could have been much clearer. That belief is testable… but only after one or the other has occurred. I more meant that you’re adjusting beliefs prior to the existence of the empirical evidence needed to verify the binary outcome.
So, for you to increase/decrease your confidence in an anticipated outcome, there’s actually some other empirical evidence justifying the increase in anticipated success—for example, a history of completing projects of a similar skill level and time commitment/deadline.
So, in that case, it seems like we more or less agree—you’re adjusting malleable feelings to align with something you more or less know should be the case. I still don’t think (and don’t think you’re saying this either) that you’re malleably altering any beliefs about known empirical results themselves. Again, you already said about as much in discussing the color of the sky.
I guess to illustrate, substitute “this project” with “design a spacecraft suitable for sustaining human life to Pluto and back within one month.” My guess is that your description of how you set up a “series of situations that increase your estimate of success” would break down, and in such a case you would not consider it advantageous to increase your confidence in an anticipated outcome of success. Or would you? Would you say that it’s beneficial to always anticipate success, even if one has good reason to suspect upcoming failure? Or perhaps you only increase confidence in an outcome of success where you have good reason to already think such will occur.
In other words, you don’t arbitrarily increase your confidence level simply because it can be influenced; you increase it if and when there are some other factors in place that lead you to think that said confidence should be increased.
Is that any clearer than mud? :)
Agreed that, when testable propositions are involved, I use this as a mechanism for artificially adjusting my expectations of the results of an as-yet-unperformed test (or an already performed test whose results I don’t know).
Adjusting my existing knowledge of an already performed test would be… trickier. I’m not sure how I would do that, short of extreme measures like self-hypnosis.
Agreed that arbitrarily increasing my confidence level simply because I can is not a good idea, and therefore that, as you say, I increase it if there are other factors in place that lead me to think it’s a good idea.
That said, those “other factors” aren’t necessarily themselves evidence of likely success, which seems to be an implication of what you’re saying.
To pick an extreme example for illustrative purposes: suppose I estimate that if I charge the armed guards wholeheartedly, I will trigger a mass charge of other prisoners doing the same thing, resulting in most of us getting free and some of us getting killed by the guards, and that this is the best available result. Suppose I also estimate that, if I charge the guards, I will likely be one of the ones who dies. Suppose I further estimate that I am not sufficiently disciplined to be able to charge the guards wholeheartedly while believing I will die; if I do that, the result will be a diffident charge that will not trigger a mass charge.
Given all of that, I may choose to increase my confidence level in my own survival despite believing that my current confidence level is accurate, because I conclude that a higher confidence level is useful.
Of course, a perfect rationalist would not need such a mechanism, and if one were available I would happily share my reasoning with them and wait for them to charge the fence instead, but they are in short supply.
Of course, these sorts of scenarios are rare. But it’s actually not uncommon to enter into situations where I’ve never done X before, and I don’t really know how difficult X is, so a prior probability of 50% of success/failure seems reasonable… but I also suspect that entering the situation with a 50% estimate of success will make failure more likely than entering with an 85% estimate of success… so I artificially pick a higher prior, because it’s useful to do so.
So, while I would not say it’s always beneficial to anticipate success, I would say that it’s sometimes beneficial even if one has good reason to suspect failure.
Whether a trip to Pluto could ever be such an example, and how I might go about artificially raising my estimate of success in that case, and what the knock-on effects of doing so might be… I don’t know. I can’t think of a plausible scenario where it would be a good idea.
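(To make the reasoning about picking a higher prior concrete, here is a toy sketch with entirely made-up numbers. The point is just that when the confidence level I adopt itself affects my chance of success, the confidence worth adopting need not equal my detached estimate; the specific model below is a hypothetical illustration, not anything stated in the discussion above.)

```python
# Toy model (all numbers invented): adopted confidence feeds back into
# the probability of success, so adopting 85% beats adopting the
# "accurate" 50% estimate even though 85% is not itself accurate.

def p_success(adopted_confidence: float) -> float:
    """Hypothetical link: acting more confidently makes success more likely."""
    return 0.2 + 0.6 * adopted_confidence


detached_estimate = 0.5  # what I'd estimate if confidence had no causal role

for adopted in (detached_estimate, 0.85):
    print(f"adopt {adopted:.0%} confidence -> ~{p_success(adopted):.0%} chance of success")

# adopt 50% confidence -> ~50% chance of success
# adopt 85% confidence -> ~71% chance of success
```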
Well, this has been a lovely discussion. Thanks for the back and forth; I think we’re in agreement, and your last example was particularly helpful. I think we’ve covered that:
we’re not talking about arbitrarily increasing confidence for no reason (just because we can)
we’re also [probably] not talking about trying to increase belief in something contrary to evidence already known (increase belief in ~X when the evidence supports X). (This is actually the category I originally thought you were referring to, hence my mention of “tricking” oneself. But I think this category is now ruled out.)
this technique is primarily useful when emotions/motivations/feelings are not lining up with the expected outcome given available evidence (success is likely based on prior experience, but success doesn’t feel likely, and this is actually increasing the likelihood of failure)
there are even some situations when an expectation of failure would decrease some kind of utilitarian benefit, and thus one needs to act as if success is more probable, even though it’s not (with the caveat that improved rationality would make this unnecessary)
Does that about sum it up?
Thanks again.
Works for me!
Um, that’s not what he actually said, you know.
It’s even right there in the part you quoted.
TheOtherDave doesn’t think you have a choice whether you believe X or non-X, just how you feel about your beliefs. To use your analogy, the only choice is deciding whether the fact that (you believe that) the sky is blue makes you happy or not.
Doh! You are absolutely correct. I left out a “non” in the first clause and thought that the comment was on the “adjustability” of the belief, not the adjustability of the feelings about the inevitable belief. Whoops—thank you for the correction.