Answering “well” to “how are you doing”, and suppressing criticism, are both examples of optimism bias on a social scale. The social norms appear to be optimized for causing more positivity than negativity to be expressed. Thus, the socially accepted beliefs have optimism bias.
The argument about happiness is somewhat more complex. I think the functional role of happiness in a mind is to track how well things have gone recently, whether things are going better than expected, etc. So, “hacking” that to make it be high regardless of the actual situation (wireheading) would result in optimism bias. (I agree this is different in that, rather than suggesting people already have optimism bias, it suggests people are talking as if it is normative to have optimism bias)
Answering “well” to “how are you doing”, and suppressing criticism, are both examples of optimism bias on a social scale. The social norms appear to be optimized for causing more positivity than negativity to be expressed. Thus, the socially accepted beliefs have optimism bias.
How are “social norms … optimized for causing more positivity than negativity to be expressed” an example of “someone … believ[ing] that they themselves are less likely to experience a negative event”? What is the relationship of the one to the other, even?
As far as the happiness thing, this is really quite speculative and far from obvious, and while I don’t have much desire to argue about the functional role of happiness, etc., I would suggest that taking it to be an example of optimism bias (or indicative of a preference for having optimism bias, etc.) is ill-advised.
It’s hard to disentangle the belief that things are currently going well from the belief that things will go well in the future, as present circumstances cause future circumstances. In general, a bias towards thinking things are going well right now will cause a bias towards thinking things will go well in the future.
If someone is building a ship, and someone criticizes the ship for being unsafe, but this criticism is suppressed, that would result in optimism bias at a social scale, since it leads people to falsely believe the ship is safer than it actually is.
If I’m actually worried about getting fired, but answer “well” to “how are you doing”, then that would result in optimism bias on a social scale, since the socially accepted belief is falsely implying I’m not worried and my job is stable.
If someone is building a ship, and someone criticizes the ship for being unsafe, but this criticism is suppressed, that would result in optimism bias at a social scale, since it leads people to falsely believe the ship is safer than it actually is.
This seems to assume that absent suppression of criticism, people’s perceptions would be accurate.
My view is that people make better judgments with more information, generally (but not literally always), but not that they always make accurate judgments when they have more information. Suppressing criticism but not praise, in particular, is a move to intentionally miscalibrate/deceive the audience.
I think there might be something similar going on in group optimism bias vs. individual optimism bias, but that this depends somewhat on whether you accept the multi-agent model of mind.
In this case, I don’t think so. In the parable, each vassal individually wants to maintain a positive impression. Additionally, vassals coordinate with each other to praise and not criticize each other (developing social norms such as almost always claiming things are going well). These are both serving the goal of each vassal maintaining a positive impression.
I think I’m asking the same question as Said: “How is this the same phenomenon as someone saying ‘I’m fine’, if not relying on [something akin to] the multi-agent model of mind?” Otherwise it looks like it’s built out of quite different parts, even if they have some metaphorical similarities.
I am claiming something like a difference between implicit beliefs (which drive actions) and explicit narratives (which drive speech), and claiming that the explicit narratives are biased towards thinking things are going well.
This difference could be implemented through a combination of self-deception and other-deception. So it could result in people having explicit beliefs that are too optimistic, or explicitly lying in ways that result in the things said being too optimistic. (Self-deception might be considered an instance of a multi-agent theory of mind, but I don’t think it has to be; the explicit beliefs may be a construct rather than an agent)
Hmm, okay that makes sense. [I think there might be other models for what’s going on here but agree that this model is plausible and doesn’t require the multi-agent model]