For most propositions this is straightforward enough, but it is insufficient when infinite or near-infinite utility is being ascribed to such behavior (as advocates of various gods routinely do)… human brains are not well calibrated enough to perform sensible expected value calculations even on rare events with large utility shifts (which is one reason lotteries remain in business), let alone on vanishingly unlikely events with vast utility shifts. So when faced with propositions about vanishingly unlikely events with vast utility shifts, I’m justified in being skeptical about even performing an expected value calculation on them, if the chances of my having undue confidence in my result are higher than the chances of my getting the right result.
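To make the lottery point concrete, here is a minimal sketch of the expected value arithmetic. The ticket price, jackpot, and odds are illustrative assumptions that roughly mirror a large US lottery, not figures from the text.

```python
# Minimal sketch of the expected value arithmetic above. All figures are
# illustrative assumptions, not from the text: a $2 ticket, a $100M
# jackpot, and 1-in-300M odds roughly mirror a large US lottery.

ticket_price = 2.00            # cost of one ticket (assumed)
jackpot = 100_000_000          # payout on a win (assumed)
p_win = 1 / 300_000_000        # probability of winning (assumed)

# Expected value of one ticket: probability * payout - cost.
ev = p_win * jackpot - ticket_price
print(f"EV per ticket: ${ev:.2f}")  # about -$1.67, a loss on average

# For vanishingly unlikely events with vast claimed utilities, the product
# is dominated by the probability estimate itself, which is exactly where
# the text argues our calibration fails.
vast_utility = 1e15
for p in (1e-9, 1e-12, 1e-15):
    print(f"p = {p:.0e} -> EV = {p * vast_utility:,.2f}")
```

The three outputs of the loop span six orders of magnitude on nothing but the probability guess, which is the point about undue confidence in one's result.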
The inverted Pascal’s Wager.
or
Did you know that the first Matrix was designed to be a perfect human world? Where none suffered, where everyone would be happy. It was a disaster. No one would accept the program.