In the post ‘Pascal’s Mugging: Tiny Probabilities of Vast Utilities’, Eliezer Yudkowsky writes, “I don’t feel I have a satisfactory resolution as yet”. This might sound like yet another niche puzzle, with no relevance to rationality in general or to the bulk of friendly AI research. But it is actually one of the most important open problems, because it undermines the basis of rational choice itself: expected utility maximization.
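To make the problem concrete, here is a minimal worked example of the expected-utility arithmetic, with illustrative numbers of my own choosing rather than figures from Yudkowsky's post. Suppose the mugger demands a wallet worth $c$ utils and promises a payoff of $U$ utils, which you credit with only a tiny probability $p$:

$$
\mathbb{E}[\text{pay}] \;=\; p\,U \,-\, (1-p)\,c \;\approx\; p\,U - c.
$$

With $p = 10^{-100}$, $c = 5$, and a promised $U = 10^{110}$, paying has an expected utility of roughly $10^{10}$ utils. Since the mugger can always name a $U$ large enough to swamp any $p > 0$, naive expected utility maximization says to hand over the wallet every time.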
The evolved/experienced heuristic is probably to ignore such offers. In most cases where they crop up, it is an attempted mugging, just as it was with Pascal and God. So most people simply learn to tune them out.