So, backing up, let me put forth my biggest objections to your idea as I see it. I will try to stick to arguing only about these points until we can reach a consensus.
I do not believe there is anything so bad that you would trade $5 to prevent it from happening with probability 10^(-500). If there is, please let me know. If not, then this is a statement that is independent of your original priors and that implies (as noted before) that your utility function is bounded.
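To spell out the step from that refusal to a bound (a sketch of my own, under ordinary expected-utility reasoning, taking the cost of the fee to be 5 units of utility for concreteness): paying is worthwhile only if

\[ 10^{-500} \cdot \lvert u(X) \rvert > 5, \qquad \text{i.e.} \qquad \lvert u(X) \rvert > 5 \times 10^{500}. \]

Refusing the trade for every conceivable X therefore says, by revealed preference, that |u(X)| ≤ 5*10^(500) for all X, which is a (very large but finite) bound on the utility function.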
I concede that the condition u(X) = O(1/p(X)) implies that one would be immune to the classical version of Pascal’s mugging. What I am trying to say now is that it fails to confer immunity to other variants of Pascal’s mugging that would still be undesirable. While a good decision theory should certainly be immune to [the classical] Pascal’s mugging, a failure to be immune to other mugging variants still raises issues.
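For concreteness, here is how the condition blocks the classical mugging (my own unpacking; write C for the unspecified constant hidden in the O(1/p(X)) bound): the probability-weighted contribution of any single outcome to an expected-utility calculation is uniformly bounded, since

\[ \lvert p(X) \, u(X) \rvert \;\le\; p(X) \cdot \frac{C}{p(X)} \;=\; C, \]

so a mugger cannot make the promised stakes grow faster than the probability of the threat shrinks.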
My claim (which I supported with math above) is that the only way to be immune to all variants of Pascal’s mugging is to have a bounded utility function.
My stronger claim, in case you agree with all of the above but think it is irrelevant, is that all humans have a bounded utility function. But let’s avoid arguing about this point until we’ve resolved all of the issues in the preceding paragraphs.
0. I’m a little suspicious of talking about “the utility function” of a human being. We are messy biological creatures whose behavior is determined, most directly, by electrochemical stuff and not economic stuff. Our preferences are not consistent from minute to minute, and there is a lot of inconsistency between our stated and revealed preferences. We are very bad at computing probabilities. And so on. It’s better to speak of a given utility function approximating the preferences of a given human being. I think we can (we have to) leave this notion vague and still make progress.
“My stronger claim, in case you agree with all of the above but think it is irrelevant, is that all humans have a bounded utility function.”
I think that this is plausible. In the vaguer language of 0., we could wonder if “any utility function that approximates the preferences of a human being is bounded.” The partner of this claim, that events with probability 10^(-500) can’t happen, is also plausible. For instance, they would both follow from any kind of ultrafinitism. But however plausible we find these claims, none of us yet knows whether they hold, so it’s valuable to consider alternatives.
Write X for a terrible thing (if you prefer the philanthropy version, a wonderful thing) that has probability 10^(-500). To pay $5 to prevent X means, by revealed preference, that |U(X)| > 5*10^(500). Part of Komponisto’s proposal is that, for a certain kind of utility function, this would imply that X is very complicated, too complicated for him to write down. So he couldn’t prove to you (not in this medium!) that so-and-so’s utility function can take values this high by describing an example of something that terrible. It doesn’t follow that U(X) is always small, especially not if we remain agnostic about ultrafinitism.
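To put a rough number on “too complicated” (a back-of-the-envelope of my own, assuming a Solomonoff-style prior with p(X) on the order of 2^(-K(X)), where K(X) is the description length of X, and again writing C for the constant in the O(1/p(X)) bound): the condition forces

\[ \lvert U(X) \rvert \;\le\; \frac{C}{p(X)} \;\approx\; C \cdot 2^{K(X)}, \]

so |U(X)| > 5*10^(500) requires K(X) to be at least about log2(5*10^(500)/C) ≈ 1663 - log2(C) bits. This is only a crude lower bound on how complicated X must be; Komponisto’s actual argument may rest on further properties of the prior that I am not reproducing here.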