This is similar to the formulation I gave here, but I don’t think your version works. You could construct a series of different sets of knowledge X(n) that differ only in that they have different numbers n plugged in, and a bounding function B(n) such that
for all n, P(E|X(n)) U(E|X(n)) < B(n), but
lim[n→∞] P(E|X(n)) U(E|X(n)) = ∞
Basically, the mugger gets around your bound by crafting a state of knowledge X for you.
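To make the construction concrete, here is a toy numeric sketch; the particular choices for P(E|X(n)), U(E|X(n)), and B(n) are invented for illustration only, not part of either formulation:

```python
# Toy illustration only: these particular credences, utilities, and
# bound are made-up numbers, not anything from either formulation.

def p(n):
    """Assumed credence P(E | X(n)): halves with each n."""
    return 2.0 ** -n

def u(n):
    """Assumed utility U(E | X(n)): quadruples with each n."""
    return 4.0 ** n

def b(n):
    """A bounding function B(n) that every X(n) individually respects."""
    return 2.0 ** (n + 1)

for n in range(1, 6):
    product = p(n) * u(n)   # equals 2**n
    assert product < b(n)   # the bound holds for every fixed n...

print([p(n) * u(n) for n in range(1, 6)])
# ...yet the products grow without limit: [2.0, 4.0, 8.0, 16.0, 32.0]
```

The point is that a bound allowed to depend on n does not prevent the sequence of expected utilities from diverging; only a bound uniform in n would.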
I’m pretty sure the formulation given in my linked comment also protects against Pascal’s Reformed Mugger.
Basically, the mugger gets around your bound by crafting a state of knowledge X for you.
This is giving too much power to the hypothetical mugger. If he can make me believe anything he chooses (I should have called X prior belief rather than prior knowledge), then I don’t have anything: my entire state of mind is what it is only at his whim. Did you intend something less than this?
One could strengthen the axiom by requiring a bound on P(E|X) U(E|X) uniform in both E and X. However, if utility is unbounded, this implies that there is an amount so great that I can never believe it is attainable, even if it is. A decision theory that a priori rules out belief in something that could be true is also flawed.
There would have to be statements X(n) such that the maximum over E of P(E|The mugger said X(n)) U(E|The mugger said X(n)) is unbounded in n. I don’t see why there should be, even if the maximum over E of P(E|X) U(E|X) is unbounded in X.
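To illustrate the distinction with invented numbers: conditioning on the mugger having *said* X(n) can discount credence faster than the promised utility grows, keeping the product bounded even though the utilities themselves are unbounded:

```python
# Invented numbers for illustration: the promised utility is unbounded
# in n, but hearing the mugger say X(n) earns a credence penalty that
# outpaces it, so P(E|The mugger said X(n)) U(E|X(n)) stays bounded.

def u(n):
    """Assumed utility U(E | X(n)): grows without bound."""
    return 4.0 ** n

def p_given_said(n):
    """Assumed credence P(E | the mugger said X(n)): decays faster."""
    return 8.0 ** -n

products = [p_given_said(n) * u(n) for n in range(1, 8)]
print(products)  # each equals (1/2)**n, so the sequence is bounded by 1/2
```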
There would have to be statements X(n) such that the maximum over E of P(E|The mugger said X(n)) U(E|The mugger said X(n)) is unbounded in n.
Yes, and that is precisely what I said causes vulnerability to Pascal’s Mugging and should therefore be forbidden. Does your version of the anti-mugging axiom ensure that no such X exists, and can you prove it mathematically?
It does not ensure that no such X exists, but I think this scenario is outside the scope of your suggestion, which is expressed in terms of P(X) and U(X), rather than conditional probabilities and utilities.
What do you think of the other potential defect in a decision theory resulting from too strong an anti-mugging axiom: the inability to believe in the possibility of a sufficiently large amount of utility, regardless of any evidence?
Oh, so that’s where the confusion is coming from; the probabilities and utilities in my formulation are conditional, I just chose the notation poorly. Since X is a function of type number=>evidence-set, P(X(n)) means the probability of something (which I never assigned a variable name) given X(n), and U(X(n)) is the utility of that same thing given X(n). Giving that something a name, as in your notation, these would be P(E|X(n)) and U(E|X(n)).
Being unable to believe in sufficiently large amounts of utility regardless of any evidence would be very bad; we need to be careful not to phrase our anti-mugging defenses in ways that would do that. This is a problem with globally bounded utility functions, for example. I’m pretty sure that requiring all parameterized statements to produce expected utility that does not diverge to infinity as the parameter increases does not cause any such problems.
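As a sketch of what that requirement rules in and out (toy numbers again; nothing here is a worked-out decision theory, and the numeric probe is a crude stand-in for a real divergence proof):

```python
# Hedged sketch of the proposed condition: for each parameterized family
# of statements, P(E|X(n)) U(E|X(n)) must not diverge as n increases.
# All numbers are illustrative assumptions.

def diverges(p, u, horizon=50, cap=1e12):
    """Crude numeric probe: does p(n)*u(n) blow past `cap`?"""
    return any(p(n) * u(n) > cap for n in range(1, horizon))

# Mugger-style family: utility outruns the credence discount -- forbidden.
mugger = (lambda n: 2.0 ** -n, lambda n: 4.0 ** n)

# Benign family: arbitrarily large utilities allowed, but credence
# falls faster than utility grows -- permitted.
benign = (lambda n: 8.0 ** -n, lambda n: 4.0 ** n)

print(diverges(*mugger))   # True  -- ruled out by the axiom
print(diverges(*benign))   # False -- permitted
```

Note that the benign family still assigns huge utilities at large n; the condition constrains the product's behavior in the parameter, not the size of any individual utility, which is why it avoids the defect of globally bounded utility functions.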
This is giving too much power to the hypothetical mugger.

He doesn’t get to make you believe anything he chooses; making you believe statements of the form “The mugger said X(n)” is entirely sufficient.