One category of cases where self-deception might be (evolutionarily) adaptive would be for males to be over-confident about their chances of picking up a female for a one-night stand (or, alternatively, over-confident about how pleasurable that dalliance would be, and/or about how little they would be emotionally hurt by a rejection of their advances).
Suppose that in reality the potential utility to the male of the one-night stand (if the seduction works) is twice as large as the utility loss (if rejected), and the actual chance of success is 20%. With exactly correct estimates, the expected value of an attempt is 0.2×2 − 0.8×1 = −0.4 (in units of the rejection loss), so such a male will never make pick-up attempts. Another male who self-deceives into believing his chances are 40% computes an expected value of 0.4×2 − 0.6×1 = +0.2 and will try every time; some of the time he'll get the one-nighter, and some of that time he'll sire a baby and spread his genes. Thus, in such a situation, self-deceiving into over-confidence may be adaptive.
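A minimal sketch of the arithmetic (Python; the payoffs are normalized so the rejection loss is 1 unit and the success gain is 2 units, and the function name is mine, purely for illustration):

    # Normalize utilities: a rejection costs 1 unit, a success gains 2 units.
    GAIN, LOSS = 2.0, 1.0

    def expected_value(p_success):
        # Expected utility of attempting, under a believed success probability.
        return p_success * GAIN - (1.0 - p_success) * LOSS

    print(expected_value(0.20))  # -0.4: the accurate estimator never attempts
    print(expected_value(0.40))  # +0.2: the self-deceived estimator always attempts

The break-even belief is p×2 = (1−p)×1, i.e. p = 1/3: any male whose believed chance exceeds one third will attempt, and with a true 20% success rate the over-confident one still sires children the accurate one never does.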
There is, of course, a rather large random/unknown component in the amortized present value of the amount of good any action of mine is going to do. Maybe my little contributions to Python and other open-source projects will be of some fractional help one day to somebody writing some really important programs (though more likely they won't). Maybe my buying and bringing food to hungry children will enhance the diet, and thus facilitate the brain development, of somebody who one day will do something really important (though more likely it won't).

Landsburg, in http://www.slate.com/id/2034/, argued for assessing the expected value of one's charitable giving conditional on each feasible charitable action, then focusing all available resources on the single action with the highest expected value, no matter how uncertain that assessment is. However, this optimizes only for the peak of the a posteriori distribution, ignores the big issue of radical (Knightian) uncertainty, and so on; so I don't really buy it (though pondering and debating these issues HAS led me to focus my charitable activities more, as have other lines of reasoning).
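To make Landsburg's prescription concrete, here is a minimal sketch (the charity names and per-dollar expected-good numbers are invented for illustration): pure expected-value maximization always concentrates the entire budget on the single highest-scoring action, however shaky that one point estimate may be.

    # Hypothetical per-dollar expected-good estimates (invented numbers).
    estimates = {"malaria nets": 2.3, "school meals": 1.8, "open source": 0.9}
    budget = 1000.0

    # Landsburg-style rule: put everything on the action with the highest
    # point estimate, ignoring how uncertain that estimate is.
    best = max(estimates, key=estimates.get)
    allocation = {name: (budget if name == best else 0.0) for name in estimates}
    print(allocation)  # the whole 1000 goes to "malaria nets", nothing elsewhere

Note that this rule is insensitive to the width of the error bars around each point estimate, which is exactly where the Knightian-uncertainty objection bites.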