The money pump is but an illustration, not the one true definitive argument for the standard decision theory. For even if this particular Allais gamble isn’t repeated, you’re going to make many, many more decisions under uncertainty in your life (which job to take, what to study, where to live, &c.). Choosing the option with highest expected utility (for whatever your utility function is) is the way you ensure optimal long-run outcomes; this remains true whether or not someone is constantly hanging around asking if you want 34% chance of $24 or 33% chance of $27.
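The long-run claim can be checked with a quick simulation, using the two gambles quoted above (34% chance of $24 vs. 33% chance of $27). The simulation itself is my own sketch, not anything from the thread; the expected values are 0.34 × 24 = 8.16 and 0.33 × 27 = 8.91, so over many independent decisions the higher-expected-value option pulls ahead:

```python
import random

def play(p, prize):
    """Return the prize with probability p, else 0."""
    return prize if random.random() < p else 0

random.seed(0)
trials = 100_000

# Expected values of the two gambles from the discussion.
ev_a = 0.34 * 24   # 8.16
ev_b = 0.33 * 27   # 8.91

# Total winnings over many independent repetitions of each gamble.
total_a = sum(play(0.34, 24) for _ in range(trials))
total_b = sum(play(0.33, 27) for _ in range(trials))

# Per-decision averages converge toward the expected values,
# so the higher-EV gamble dominates in the long run.
print(total_a / trials, total_b / trials)
```

The point is exactly the one in the paragraph above: each individual decision is uncertain, but across many decisions the expected-utility maximizer ends up ahead.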
But what this shows is that people do not necessarily have a single utility function for all circumstances. It’s possible for someone to prefer A to B to C to A in situations where any binary choice of those excludes the others from immediate possibility, and the only reason to disallow this, as far as I can see, is to try to force the territory to fit the map.
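For concreteness, the money pump against exactly this kind of cyclic preference (A over B, B over C, C over A) can be sketched in a few lines. The agent, starting holding, and $1 trading fee here are illustrative assumptions of mine, not details from the thread:

```python
# An agent with cyclic preferences A > B > C > A can be led in a
# circle of paid trades, ending where it began but strictly poorer.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means "prefers x to y"

def trade(holding, offer, wealth, fee=1):
    """If the agent prefers the offer to its holding, it pays the fee to swap."""
    if (offer, holding) in prefers:
        return offer, wealth - fee
    return holding, wealth

holding, wealth = "C", 10
for offer in ["B", "A", "C"]:   # one full circuit of the cycle
    holding, wealth = trade(holding, offer, wealth)

print(holding, wealth)  # → C 7: back where it started, three fees poorer
```

Each binary choice in the loop looks locally reasonable to the agent, which is precisely the situation described above; the exploitability only appears once the choices are chained together.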
I’m not sure what you mean by disallow. As a purely descriptive matter about how actually existing humans actually are: I agree, people don’t have a single utility function for all circumstances; people don’t have utility functions at all! As a normative matter—well, I just interpret this as meaning that humans are fairly stupid on an absolute scale. If it turns out that our deepest hearts’ desires are contradictory when rigorously listed out, then this is an unspeakably horrible tragedy from our perspective—but what can I say? Something has to give; it’s not up to us.
It’s only a tragedy if it’s otherwise possible to get everything we want… but actually getting what we want is a tragedy for humans anyway, so that’s nothing worse. As for humans being stupid on an absolute scale, I don’t necessarily disagree, but I don’t think that examination of goals can tell you that. The only way to make a choice is by reference to a goal, so you can’t rationally choose your goal(s).