There would have to be statements X(n) such that the maximum over E of P(E | the mugger said X(n)) · U(E | the mugger said X(n)) is unbounded in n.
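In symbols, a minimal sketch of that condition (the name M(n) is shorthand introduced only here for the evidence "the mugger said X(n)"):

$$\sup_{n}\;\max_{E}\; P\bigl(E \mid M(n)\bigr)\, U\bigl(E \mid M(n)\bigr) \;=\; \infty.$$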
Yes, and that is precisely what I said causes vulnerability to Pascal’s Mugging and should therefore be forbidden. Does your version of the anti-mugging axiom ensure that no such X exists, and can you prove it mathematically?
It does not ensure that no such X exists, but I think this scenario is outside the scope of your suggestion, which is expressed in terms of P(X) and U(X), rather than conditional probabilities and utilities.
What do you think of the other potential defect in a decision theory resulting from too strong an anti-mugging axiom: the inability to believe in the possibility of a sufficiently large amount of utility, regardless of any evidence?
Oh, so that’s where the confusion is coming from; the probabilities and utilities in my formulation are conditional, I just chose the notation poorly. Since X is a function of type number=>evidence-set, P(X(n)) means the probability of something (which I never assigned a variable name) given X(n), and U(X(n)) is the utility of that same thing given X(n). Giving that something a name, as in your notation, these would be P(E|X(n)) and U(E|X(n)).
Being unable to believe in sufficiently large amounts of utility regardless of any evidence would be very bad; we need to be careful not to phrase our anti-mugging defenses in ways that would do that. This is a problem with globally bounded utility functions, for example. I’m pretty sure that requiring every parameterized statement to produce expected utility that does not diverge to infinity as the parameter increases does not cause any such problems.
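One way to write that requirement down, as a sketch using the notation from above (X ranges over parameterized statements of type number=>evidence-set, E over outcomes):

$$\forall X:\quad \sup_{n}\;\max_{E}\; P\bigl(E \mid X(n)\bigr)\, U\bigl(E \mid X(n)\bigr) \;<\; \infty.$$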