This appears to be (to my limited knowledge of what science knows) a well-known bias. But like most biases, I think I can imagine occasions when it serves as a useful heuristic.
The thought occurred to me because I play miniatures and card games—I see other commenters have also mentioned some games.
Let’s say, for example, I have a pair of cards that both give me X of something—say they each deal some amount of damage, for those familiar with these games. One card gives me 4 of that something. The other gives me anywhere from 1 to 8, uniformly distributed—a die roll, say.
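For concreteness, here is the expected-value arithmetic, as a quick sketch using the made-up numbers above:

```python
# Expected value of the two hypothetical cards described above.
certain_card_ev = 4                     # always gives 4
random_card_ev = sum(range(1, 9)) / 8   # uniform over 1-8 -> 4.5

print(certain_card_ev, random_card_ev)  # 4 vs. 4.5
```

Note that in this example the random card actually has the slightly higher expected value, which is exactly the case where the planning argument below does real work.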
Experienced players of these games will tell you that unless the random card gives you a higher expected value, you should play the certain card. And the empirical evidence suggests they know what they’re talking about, because these are the players who win games. What do they say if you ask them why? They say you can plan around the certain gain.
I think that notion is important here. If I have a gain that is certain, at least in any of these games, I can exploit it to its fullest potential, for a high final utility. I can lure my opponent into a trap because I know I can beat them; I can make an aggressive move that only works if I deal at least four damage—heck, the mere ability to prune my informal minimax tree is no small gain in a situation like this.
Dealing 4 damage without exploiting it has a much smaller end payoff. And sure, I could try to exploit the random effect in just the same way—I’ll get the same result if I win my roll. But if I TRY to exploit that gain and FAIL, I’ll be punished severely. Adding those failure costs in skews the decision matrix quite a bit, as the sketch below shows.
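Here is a toy version of that decision matrix. All the payoff numbers are invented for illustration; the only structural assumptions are that an exploited gain pays more than an unexploited one, and that a failed exploit costs you badly:

```python
# Toy payoffs, invented for illustration.
PLAIN_PAYOFF = 4      # just deal the damage, with no plan built on it
EXPLOIT_WIN = 12      # the plan works (you dealt at least 4 damage)
EXPLOIT_LOSS = -10    # the plan backfires (you dealt less than 4)

p_enough = 5 / 8      # P(roll >= 4) on a uniform 1-8 die

# Certain card: the exploit always succeeds.
certain_exploit_ev = EXPLOIT_WIN
# Random card: the exploit is itself a gamble.
random_exploit_ev = p_enough * EXPLOIT_WIN + (1 - p_enough) * EXPLOIT_LOSS

print(certain_exploit_ev)  # 12
print(random_exploit_ev)   # 3.75, worse than not exploiting at all
```

Under these made-up numbers, exploiting the random card is worse than taking the plain payoff, while exploiting the certain card triples it. That is the skew.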
And none of this is to say that the gambling outcomes being used as examples above aren’t what they seem to be. But I’m wondering if humans are bad at these decisions partly because the ancestral environment contained many examples of situations like the one I’ve described. Trying to exploit a hunting technique that MIGHT work could get you eaten by a bear—a high negative utility hidden in that matrix. And this could lead, after natural selection, to humans who account for such ‘hidden’ downsides even when they don’t exist.