1) Might there be domains where there is a slight negative expected utility for accuracy of belief, at least at the levels of rationality attainable by humans now (see: discussions of the valley of bad rationality)? For a true master of the mature art of human rationality, a person who has a detailed self-model and very accurate probability estimates, there would presumably be no reason to fiddle with expectations; these would flow naturally from their beliefs about the world. But since I don’t yet have anything like that, maybe it’s a good idea for me to purposefully try to make myself believe that the future will be good.
Well, isn’t that a big theorized reason why we have a lot of our cognitive biases? That even if they lead to a less accurate understanding of the territory, they send signals to other people (who happen to share those biases) that tend to encourage them to help us? In an evolutionary environment, personal beliefs don’t have to be accurate; they just have to help ensure reproductive fitness.
Sure. Now, is there ever a time we should try to make ourselves believe things that we don’t necessarily have a good reason to think are true?
This is less the problem than the part where we already believe lots of things that we don’t have a good reason to think are true. Pessimists have a tendency to demand a higher burden of proof for positive thoughts than negative ones. If they were just as skeptical of their negative beliefs, more of the positive would get through!
That is, it’s not that we have to add a bunch of beliefs in order to be positive, it’s that we need to stop believing all sorts of pessimistic things, or at least believing that they’re relevant, or that they’re going to be a disaster.
If a thing you’re pessimistic about isn’t under your control, for example, then there’s probably no point worrying about it. And if it is under your control, then you could focus on the part where you can do something.
The part where we struggle is when we (in effect) spend lots of time arguing over whether we control something or we don’t, neither believing the matter is fully in hand, nor willing to dismiss it as not worth worrying about/not in one’s control.
So one’s pessimistic objections tend to be phrased as if the matter were out of one’s control. If you think about accomplishing something, the objection might be an absolute like “you’ll never pull that off”, instead of the more accurate belief: “you’ll never pull that off unless you make some changes from what you did last time”.
Bottom line: it’s not about what’s true or false, but about which thoughts are relevant to load into working memory. Many true things are not useful, and many useful things are only approximately true.
Just a couple of points on this discussion, which I’m sure I walked in at the middle of:
(1) One thing it illustrates is the important difference between what one “should” believe in the sense of it being prudential in some way, versus a very different notion: what has or has not been sufficiently well probed to regard as warranted (e.g., as a solution to a problem, broadly conceived). Of course, if the problem happens to be “to promote luckiness”, a well-tested solution could turn out to be “don’t demand well-testedness, but think on the bright side.”
(2) What I think is missing from some of this discussion is the importance of authenticity. Keeping up with contacts, and all the other behaviors, if performed as part of a contrived plan will backfire.
No, not because it wouldn’t sometimes be helpful, but because it’s very difficult to knowingly self-deceive in that way: ridding yourself of the knowledge that you are self-deceiving would require thought suppression, and humans aren’t very good at that. Your insecurity in the belief would show through in your behavior.
When you identify a reason to behave differently, I think it is better to alter the behavior through methods other than self-rhetoric, such as modifying your environment, changing your habits, or exerting willpower.
In any case, I think the mood-enhancing sort of optimism about the future isn’t about believing that things will turn out okay, but about having a lower set point for how “okay” things have to turn out in order for you to be happy with the outcome. You can be quite pessimistic epistemically and still have this sort of emotional optimism, in which outcomes are accurately modeled and yet negative outcomes are perceived as less negative, while positive outcomes are perceived as more positive. It’s not that expectations change, but that failure is less painful and success is sweeter … behaviorally, of course, this is difficult to distinguish from an expectation shift. (Anyone want to devise a way?)
I’m not sure how one would go about acquiring this sort of optimism, though. For me, my levels of that sort of optimism seem to be a function of my general health and the absence of chronic external stress—a complex socio-biological thing that can’t necessarily be influenced by memetics alone.