You miss my point. I am objecting to those axioms. I don’t want to change my utility function. If God is real, perhaps he really could offer infinite reward or infinite punishment. You might really think murdering 3^^^^3 people is just that bad.
However, these events have such low probability that I can safely choose to ignore them, and that’s a perfectly valid choice. An agent that maximizes expected utility over bets like these will almost certainly do worse in the real world than one that ignores them.
Which axiom do you reject?
Continuity, I would say.
That makes no sense in context, since continuity is equivalent to saying (roughly) ‘If you prefer staying on this side of the street to dying, but prefer something on the other side of the street to staying here, there exists some probability of death which is small enough to make you prefer crossing the street.’
This sounds almost exactly like what Houshalter is arguing in the great-grandparent (“these events have such low probability that I can safely choose to ignore them,”) so it can’t be the axiom s/he objects to.
I could see objecting to Completeness, since in fact our preferences may be ill-defined for some choices. I don’t know if rejecting this axiom could produce the desired result in Pascal’s Mugging, though, and I’d half expect it to cause all sorts of trouble elsewhere.
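The street-crossing reading of continuity above can be sketched numerically. All the utilities and probabilities below are made-up illustrative values, not anything from the thread:

```python
# Continuity, illustrated: if you prefer the other side of the street to
# staying, and staying to dying, then some sufficiently small probability
# of death still leaves crossing with the higher expected utility.
# (All numbers here are hypothetical.)
U_STAY = 0.0      # utility of staying on this side
U_OTHER = 10.0    # utility of whatever is across the street
U_DEATH = -1e9    # utility of dying (huge but finite)

def eu_cross(p_death):
    """Expected utility of crossing, given probability p_death of dying."""
    return p_death * U_DEATH + (1 - p_death) * U_OTHER

# At a large death risk, staying wins; at a small enough risk, crossing wins.
assert eu_cross(0.5) < U_STAY
assert eu_cross(1e-10) > U_STAY
```

Note that this only works because U_DEATH is finite; continuity is exactly the axiom that rules out treating death as infinitely bad here.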
That sounds right, actually.
That for any bet with an arbitrarily small (but nonzero) value of p, there is a value of u high enough that I would take it.
That’s not one of the axioms. In fact, none of the axioms mention u at all.
True, but they must imply it in order to imply expected utility maximization.
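The claim a few comments up, that for any fixed nonzero p there is a u large enough to make the bet worth taking, is just arithmetic under expected utility maximization. The specific numbers below are hypothetical:

```python
# Sketch: for any fixed probability p > 0, a large enough payoff u makes
# the bet's expected utility p * u exceed any fixed sure alternative.
# (p and the alternative's utility are arbitrary hypothetical values.)
def bet_ev(p, u):
    return p * u

p = 1e-30                 # absurdly small probability of the payoff
alternative = 1e6         # utility of the sure alternative
u = 2 * alternative / p   # choose u large enough; any u > alternative / p works

assert bet_ev(p, u) > alternative
```

This is the structure of Pascal's Mugging: with unbounded u, no fixed p, however small, lets an expected utility maximizer dismiss the bet outright.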