‘Small enough’ here would have to be very much smaller than 1 in 100 for this argument to begin to apply. It would have to be on the ‘so small that it won’t happen before the heat death of the universe’ scale. I’m still not sure the argument works even in that case.
How small should x be? And if the argument does hold, are you going to have two different criteria for rational behavior: one for events where the probability of a positive outcome is at least 1-x, and another for events where it isn’t?
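To make the worry concrete, here is a minimal sketch of what such a split rule would look like. The threshold parameter x and the function are my own illustration, not anything proposed above:

```python
def take_bet(p_success: float, payoff: float, cost: float, x: float) -> bool:
    """Split criterion: probabilities of at least 1 - x are rounded up to
    certainty; everything below that threshold gets ordinary expected utility."""
    if p_success >= 1 - x:
        return payoff > cost            # 'treat as certain' regime
    return p_success * payoff > cost    # ordinary expected-utility regime

# The rule jumps discontinuously at the threshold: with x = 1e-6, a bet at
# p = 1 - 1e-6 and one at p = 1 - 2e-6 are judged by different criteria
# even though the probabilities are almost identical.
```

Whatever value of x you pick, the two regimes disagree arbitrarily close to the boundary, which is exactly the problem the question is pointing at.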
And also, from Nick Bostrom’s piece:
Mugger: Good. Now we will do some maths. Let us say that the 10 livres that you have in your wallet are worth to you the equivalent of one happy day. Let’s call this quantity of good 1 Util. So I ask you to give up 1 Util. In return, I could promise to perform the magic tomorrow that will give you an extra 10 quadrillion happy days, i.e. 10 quadrillion Utils. Since you say there is a 1 in 10 quadrillion probability that I will fulfil my promise, this would be a fair deal. The expected Utility for you would be zero. But I feel generous this evening, and I will make you a better deal: If you hand me your wallet, I will perform magic that will give you an extra 1,000 quadrillion happy days of life.
…
Pascal hands over his wallet [to the Mugger].
Of course, by your reasoning, you would hand over your wallet. Bravo.
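For what it’s worth, the mugger’s arithmetic is easy to check. A minimal sketch, using the figures from the quoted deal; the function and names are mine, not Bostrom’s:

```python
from fractions import Fraction

def expected_net_utility(p: Fraction, payoff_utils: int, cost_utils: int) -> Fraction:
    """Expected net utility of taking the deal: chance of payoff times its
    size, minus the cost paid with certainty."""
    return p * payoff_utils - cost_utils

p = Fraction(1, 10**16)  # Pascal's stated 1-in-10-quadrillion credence
print(expected_net_utility(p, 10**16, 1))         # the 'fair' deal: 0
print(expected_net_utility(p, 1000 * 10**15, 1))  # the 'generous' deal: 99
```

On Pascal’s own stated credence, the first offer nets exactly zero and the second nets +99 Utils, which is why unmodified expected-utility reasoning hands over the wallet.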