In general, 100% is always much more suspicious than 99.99%. For example, if you tell me that a machine you’ve built has a 99.99% chance of working, I might be worried about overconfidence, but in principle you could be right, and if you show me enough justification I might agree. If you tell me it has a 100% chance of working, then something very fishy is going on; most likely you are just lying.
For averages, it is a trivial theorem of finite probability theory that I have a non-zero probability of receiving at least the average. When your infinite reasoning starts violating laws like that, you lose your right to make use of the other laws, like expected utility theory, because you may have broken them in the process.
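To spell the finite law out (this is just the standard argument): for a lottery $X$ with finitely many outcomes $x_1, \dots, x_k$ and probabilities $p_1, \dots, p_k$,

$$E[X] \;=\; \sum_i p_i x_i \;\le\; \max_{i \,:\, p_i > 0} x_i,$$

so at least one outcome you can actually receive (one with positive probability) is at least as large as the average. The St. Petersburg lottery breaks exactly this: every pay-off it can actually hand you is finite, yet the supposed average is infinite, so no possible outcome comes anywhere near it.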
Infinity is not a real number. It violates at least one axiomatic principle of the real numbers (that every non-zero number has a reciprocal). This means you can’t just go and use it in expected utility calculations, since von Neumann and Morgenstern proved their theorem while assuming real numbers would be used (and also assuming that there would only be finitely many outcomes).
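(For the record, the axiom I have in mind is the multiplicative-inverse axiom of the real field: every $x \ne 0$ has some $x^{-1}$ with $x \cdot x^{-1} = 1$. There is no real, or even extended-real, $y$ with $\infty \cdot y = 1$, so $\infty$ just isn’t the kind of object that the usual arithmetic, and hence the usual expected-utility machinery, is built for.)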
I can’t articulate rigorously exactly what is going wrong beyond what I said above, because the reasoning I am criticising is itself non-rigorous. However, the basic idea is that an average works by the various possible outcomes each sort of ‘pulling’ the result towards themselves. That picture doesn’t explain how the average can end up above every possible outcome, which suggests there must be a hidden, probability-zero outcome with an infinite pay-off doing the actual work (this also makes sense: there is indeed a possibility, with probability zero, that the pay-off will be infinite). My utility function doesn’t accept infinite pay-offs, so I reject the offer.
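To make that ‘hidden outcome doing the work’ picture concrete, take the standard version of the lottery that pays $2^n$ if the first head comes up on flip $n$:

$$E[X] \;=\; \sum_{n=1}^{\infty} \frac{1}{2^n} \cdot 2^n \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty.$$

Each term pulls with only finite weight, and every pay-off you could actually receive is finite, yet the total overshoots all of them. At least as I’m picturing it, the only place the ‘extra’ can be hiding is the probability-zero branch where the coin never lands heads: exactly the infinite pay-off I was gesturing at.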
We’re getting this infinity as a limit, though, which means we can approach the infinite case through perfectly reasonable cases. In the case of the St. Petersburg lottery, suppose that the lottery stops after N coin flips, but you get to choose N. In that case, you can still make the expected payout arbitrarily large by choosing N sufficiently high. “Arbitrarily large” seems like a well-behaved analogue of infinity.
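Concretely, with the usual schedule that pays $2^n$ when the first head lands on flip $n$, and the convention that you get nothing if all $N$ flips come up tails, the truncated lottery has expected pay-out exactly $N$. Here is a minimal toy sketch (my own code, nothing from the original post; the ‘pays nothing after N tails’ convention is just one reasonable choice) that checks this both by summing the series and by simulation:

    import random

    def exact_expected_payout(n_max: int) -> float:
        # E_N = sum_{n=1}^{N} (1/2**n) * 2**n = N for the lottery that pays
        # 2**n if the first head lands on flip n <= n_max, and 0 otherwise.
        return sum((0.5 ** n) * (2 ** n) for n in range(1, n_max + 1))

    def one_play(n_max: int) -> int:
        # Flip a fair coin until the first head, giving up after n_max flips.
        for n in range(1, n_max + 1):
            if random.random() < 0.5:   # heads
                return 2 ** n
        return 0                        # all tails: this truncation pays nothing

    for n_max in (5, 10, 20):
        trials = 200_000
        sample_mean = sum(one_play(n_max) for _ in range(trials)) / trials
        # The exact value is exactly n_max; the sample mean wanders around it,
        # more and more noisily, since the pay-out variance blows up with n_max.
        print(n_max, exact_expected_payout(n_max), round(sample_mean, 1))

So each truncated version is a perfectly ordinary finite lottery, and you can push its expected value past any target just by picking N big enough.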
In the case of the OP, I’m sure that if TimFreeman were a god, he would be reasonably accommodating about special requests such as “here’s $1, but please, if you’re a god, don’t flip the coin more than N times.” Suddenly there’s no infinity, but by choosing N sufficiently high, you can make the arbitrarily large expected payout in the unlikely case that TimFreeman is a god outweigh the certain loss of $1.
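Putting rough numbers on it (back-of-the-envelope only, ignoring diminishing utility of money): if $p$ is your probability that TimFreeman really is a god, and the capped offer has expected pay-out on the order of $N$ dollars (as in the truncated lottery above), then handing over the dollar nets you roughly

$$-1 + p \cdot N,$$

which goes positive as soon as $N > 1/p$. Even a prior of one in a billion only requires choosing N somewhere above a billion.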
Okay, that is definitely more reasonable. It has now essentially become analogous to Pascal’s mugging, where a guy comes up to me in the street and says that if I give him £5 then he will give me whatever I ask for in the unlikely event that he is God. So why waste time with a lottery? Why not just say that?
I don’t have a really convincing answer; Pascal’s Mugging is a problem that needs to be solved. But I suspect I can find a decision-theory answer without needing to give up on what I want just because it’s not convenient.
The best I can manage right now is that there is a limit to how much I could specify in my lifetime, and the probability of Tim being God, multiplied by that limit, is too low to make the deal worthwhile.
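In symbols, the claim is just that if $U_{\max}$ is the biggest pay-off I could physically manage to specify before I die, and $p$ is my probability that the mugger (or Tim) really is God, then $p \cdot U_{\max}$ still comes out below the utility of the £5, so the deal fails on ordinary expected-utility grounds without any appeal to infinities.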
The reason the lottery is there is that you don’t have to specify N. Sure, if you do, it makes the scary infinities go away, but it seems natural that you shouldn’t be able to improve your expected outcome by adding a limit on how much you can win, so the outcome you get without a cap should be at least as good as any outcome you could get by specifying N.
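In the notation from before: if capping the lottery at $N$ flips can only make it worse (or at best leave it unchanged), then whatever value $V$ you assign the uncapped version must satisfy $V \ge E_N = N$ for every $N$, and the only way to sit above every finite $N$ is to be unbounded. That is the dominance intuition doing the work here.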
True, “seems natural” isn’t a good guideline, and in any case it’s obvious that there’s something fishy going on with our intuitions. However, if I had to point to something that’s probably wrong, it probably wouldn’t be the intuition that the infinite lottery is at least as good as any finite version.