This requirement (large numbers that refer to sets have large Kolmogorov complexity) is a weaker version of my and RichardKennaway’s versions of the anti-mugging axiom. However, it doesn’t work for all utility functions; for example, Clippy would still be vulnerable to Pascal’s Mugging if using this strategy, since he doesn’t care whether the paperclips are distinct.
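To make the Clippy loophole concrete (a sketch with assumed details, not from the original comment): a number like 3^^^3 can be named by a program only a few lines long, so a utility function that merely counts paperclips attaches astronomical utility to a hypothesis of tiny Kolmogorov complexity.

```python
# Illustrative only: huge numbers can have very short descriptions.
def knuth_arrow(a: int, n: int, b: int) -> int:
    """a ^ ... ^ b with n up-arrows (only feasible to evaluate for tiny inputs)."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return knuth_arrow(a, n - 1, knuth_arrow(a, n, b - 1))

# The *description* "knuth_arrow(3, 3, 3)" is a few dozen characters, so
# K(3^^^3) is small even though the value itself is far too large to compute.
# A bare paperclip count doesn't have to pick out 3^^^3 distinct objects, so
# the "large numbers that refer to sets are complex" requirement never bites.
print(knuth_arrow(2, 2, 3))  # 16 -- a toy case standing in for 3^^^3
```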
Hm, that solution seems like the one I gave (ironically, on a Clippy post), where I said that if you’re allowed to posit these huge utilities from complex (and thus improbable) hypotheses, you also have to consider hypotheses that are just as complex but give the opposite utility. But in the link I gave, people seemed to find something wrong with it: specifically, that the mugger gives an epsilon of evidence favoring the “you should pay”-supporting hypotheses, making them come out ahead.
So … what’s the deal?
Arranging your probability estimates so that predictions of opposite utility cancel out is one way to satisfy the anti-mugging axiom. It’s not the only way to do so, though; you can also require that the prior probabilities of statements (without corresponding opposite-utility statements) shrink at least as fast as utilities grow. There’s no rule that says that similar statements with positive and negative utilities have to have the same prior probabilities, unless you introduce it specifically for the purpose of anti-mugging defense.
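A toy illustration of the second of those options (numbers assumed purely for illustration, not from the thread): give the k-th “mugger statement” a complexity-style prior of 2^-k and ask whether the per-statement expected utility stays bounded.

```python
# A minimal sketch under assumed toy numbers: the prior over statements falls
# like 2^-k, and the requirement is (roughly) that prior * utility stays
# bounded, i.e. the prior shrinks at least as fast as utilities grow.
def expected_utilities(utility, k_max=40):
    return [2.0 ** -k * utility(k) for k in range(1, k_max + 1)]

# Utilities growing slower than the prior shrinks: every contribution is small.
safe = expected_utilities(lambda k: 2.0 ** (0.5 * k))
# Utilities growing faster than the prior shrinks: contributions blow up, and a
# mugger can always name a statement far enough out to dominate the calculation.
muggable = expected_utilities(lambda k: 4.0 ** k)

print(max(safe))      # stays below 1
print(max(muggable))  # grows without bound as k_max increases
```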
Requiring priors to shrink at least as fast as utilities grow is my favored solution. Incidentally, if your prior shrinks faster than that, you can still be vulnerable. The mugger can simply split his offer up into a billion smaller offers, which avoids the penalty that disproportionately discounts big offers. So unless you would reject every single mugging offer of any magnitude (in which case, isn’t that kind of arbitrary?), the faster shrinking doesn’t buy you anything.
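Back-of-the-envelope numbers for the splitting trick (assumed for illustration, not from the thread): if the prior falls like 1/U², a single offer of size U is worth about 1/U in expectation, but the same total split into N pieces is worth about N²/U.

```python
# Toy model with an assumed prior P(offer of size x) proportional to x**-2,
# i.e. a prior that shrinks *faster* than the promised utility grows.
def total_expected_utility(u, n):
    """n independent offers of size u/n each; normalization constants ignored."""
    size = u / n
    return n * (size ** -2) * size  # n offers, each contributing P(size) * size

u = 1e30                                 # one astronomically large offer
print(total_expected_utility(u, 1))      # 1e-30: the big offer is discounted to nothing
print(total_expected_utility(u, 1e9))    # 1e-12: the same total, split up, looks 10^18 times better
```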
I believe a set of smaller offers would imply the existence of a statement which aggregates them and violates this formalization of the anti-mugging axiom (sketched below).
On the other hand, you can potentially be forced to search the space of all functions for the one that diverges, and it might be possible (I don’t know whether it is) to mug in a way that makes finding that function computationally hard.
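One way to read that aggregation claim (a guess at the intended construction, which the thread does not spell out): describing the whole bundle of small offers takes scarcely more bits than describing one of them,

$$K(\text{offers } 1,\dots,N \text{ each pay } u/N) \le K(\text{one offer pays } u/N) + O(\log N),$$

while the bundled statement’s utility is the full $u$. Whatever bound the anti-mugging axiom places on prior times utility for a single offer of size $u$ therefore carries over, up to that $O(\log N)$ slack, to the bundle of small offers.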
I take the aggregating thing as a constructive proof that that class of priors + utility function is vulnerable; your version just seems to put it another way. We agree on that part, I think.
I believe there is such a rule (one giving similar statements with positive and negative utilities the same prior probabilities), which doesn’t have to be introduced ad hoc, and which follows from the tenets of algorithmic information theory. Per the reasoning I gave in the linked post, an arbitrary, complex conclusion you locate (like the one in Pascal’s mugging) necessarily has a corresponding conclusion of equal complexity, but with the right predicate(s) inverted so that the inferred utility is reversed.
Because (by assumption) the conclusion is reached through arbitrary reasoning, disentangled from any real-world observation, you need no additional complexity for a hypothesis that critically inverts the first one. Since no other evidence supports either conclusion, their probability weights are determined by their complexity, and are thus equal.
That’s why I don’t think you need to introduce this reasoning as an additional axiom. However, as a separate matter (and whether or not you need it as an axiom), I thought this argument was refuted by the fact that the mugger, simply through assertion, introduces an arbitrarily small amount of evidence favoring one hypothesis over its inverse. If it refutes the defense I gave in the link, it should work against the anti-mugging axiom you’re using as well.
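Both halves of that, in toy numbers (the encoding and figures are assumptions for illustration only): flipping the utility-relevant predicate costs essentially no description length, so a complexity-style prior weights a hypothesis and its inversion equally and their contributions cancel; but a sliver of evidence from the mugger’s say-so leaves a residue of roughly epsilon times the astronomical utility.

```python
# Illustrative stand-in for a complexity prior: 2^-(bits in the description).
def prior(description: str) -> float:
    return 2.0 ** -(8 * len(description))

h     = "paying the mugger saves 3^^^3 people"
h_inv = "paying the mugger kills 3^^^3 people"  # one predicate flipped, same length

u = 1e80  # stand-in for the astronomical utility at stake

# 1. With no evidence either way, the equally-complex pair cancels exactly.
print(prior(h) * u - prior(h_inv) * u)                # 0.0

# 2. The mugger's assertion nudges one side by a tiny epsilon, and what is
#    left over is about epsilon * u -- enormous again if u is large enough.
epsilon = 1e-50
print((prior(h) + epsilon) * u - prior(h_inv) * u)    # ~1e30
```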
Thanks for the links; I seem to have missed that post.