This is consistent with my experience. I’m pretty sure self-deception solves other problems besides hostile telepaths. One such problem, which I’m fairly sure I’ve seen in people, is preserving motivation: if something is really important to me, making it happen requires a lot of effort, and the probability of success is very low (say, epsilon), and if knowing that the probability is epsilon would totally annihilate my motivation to work towards it, then hiding that low probability from myself may safeguard the motivation I need to put in all the necessary work.
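To make this concrete with a toy sketch (all numbers and the motivation model here are made up for illustration): suppose success is worth V, the required effort costs C, and my willingness to work scales with my perceived probability of success. Then an honest belief of p = epsilon can zero out my effort even when epsilon × V > C, i.e. even when the work is worth doing in expectation:

```python
# Toy model with hypothetical numbers: effort tracks perceived probability,
# but the expected value of trying depends on the true one.
V = 1_000_000   # value of success
C = 100         # cost of the required effort
p_true = 0.001  # "epsilon": the true probability of success

def effort(p_perceived, threshold=0.05):
    """Crude motivation model: below some felt threshold, I don't even try."""
    return 1.0 if p_perceived >= threshold else 0.0

print(f"Expected value of trying: {p_true * V - C:+.0f}")  # +900: worth doing
print("Effort with honest belief:  ", effort(p_true))      # 0.0: I give up
print("Effort with inflated belief:", effort(0.5))         # 1.0: I keep working
```

On this (admittedly crude) model, self-deception doesn’t change the payoffs at all; it just routes around a motivation system that refuses to engage with low-probability, high-value actions.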
For a non-numerical example, think of a situation where a natural catastrophe has struck and someone’s loved one is caught under the rubble: the person refuses to believe their loved one might be dead and does everything they can to clear the rubble as fast as possible.
Maybe this is also where the planning fallacy comes from in some cases.
What exactly is your hypothesis? Is it something like:
P1) People are irrationally averse to actions that have a positive expected value and a low probability of success.
P2) Self-deception enables people to ignore the low probability of success.
C) Self-deception is adaptive.
I tried to test this reasoning against the research that Daniel Kahneman (co-coiner of the term “planning fallacy”) has done on optimism. He has many criticisms of over-optimism among managers and executives, as well as among more ordinary people (e.g. those who pursue self-employment).
However, he also notes that, for a given optimistic individual, their optimism may have a variety of personal, social, and societal benefits, ranging from good mood and health to inspiring leadership and economic innovation. He goes so far as to say, “If you are allowed one wish for your child, seriously consider wishing him or her optimism.” (Thinking, Fast and Slow, p. 255)
Altogether, I think I’m missing a subtlety that would enable me to deduce the circumstances in which a bias towards optimism is beneficial. Given that, I’m unable to test your hypothesis.