Nobody (that I know of) has principles that are entirely absolute: they are always weighted against other principles and possible consequences, implying that they must have different weightings that are compared to find the combination that produces the best result (interpretable as the one that produces the highest utility). I suppose you could reject this and say that people just have this insanely huge preference ordering for different outcomes, but that sounds more than a bit implausible. (Not to mention that you can construct a utility function for any given preference ordering, anyway.)
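For the finite case, that last parenthetical is easy to make concrete: given a complete, transitive ordering over finitely many alternatives, the rank of each alternative already serves as a utility function representing it. A minimal sketch, with invented alternatives and an invented comparator:

```python
from functools import cmp_to_key

# Invented alternatives and an invented complete, transitive preference
# relation; any such finite ordering can be represented by ranks.
alternatives = ["apple", "banana", "cherry"]
_order = {"cherry": 0, "banana": 1, "apple": 2}   # hypothetical tastes

def prefers(a, b):
    """True if a is strictly preferred to b."""
    return _order[a] > _order[b]

def utility_from_ordering(alts, prefers):
    # Sort worst-to-best under the ordering and use the position as utility,
    # so that u(a) > u(b) exactly when a is preferred to b.
    cmp = lambda a, b: 1 if prefers(a, b) else (-1 if prefers(b, a) else 0)
    ranked = sorted(alts, key=cmp_to_key(cmp))
    return {alt: rank for rank, alt in enumerate(ranked)}

print(utility_from_ordering(alternatives, prefers))
# {'cherry': 0, 'banana': 1, 'apple': 2}
```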
I reject both it and the straw alternative you offer. I see no reason to believe that people have utility functions, that people have global preferences satisfying the requirements of the utility function theorem, or that people have global preferences at all. People do not make decisions by weighing up the “utility” of all the alternatives and choosing the maximum. That’s an introspective fairy tale. You can ask people to compare any two things you like, but there’s no guarantee that the answers will mean anything. If you get cyclic answers, you haven’t found a money pump unless the alternatives are ones you can actually offer.
An Etruscan column or Bach’s cantata 148?
Three badgers or half a pallet of bricks? (One brick? A whole pallet?)
You might as well ask “Feathers or lead?” Whatever answer you get will be wrong.
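To spell out what a money pump would take, here is a toy sketch; everything in it is invented for illustration: an agent with cyclic strict preferences A > B > C > A who will always pay a small fee to swap what it holds for something it prefers, and a trader who can actually deliver each alternative (the caveat above).

```python
# Hypothetical cyclic preferences: each key is strictly preferred to its value.
prefers_over = {"A": "B", "B": "C", "C": "A"}

def money_pump(start="A", trades=9, fee_cents=1):
    holding, paid = start, 0
    for _ in range(trades):
        # Offer the alternative the agent prefers to what it currently holds,
        # charging a small fee for the swap; the agent accepts every time.
        better = next(x for x, worse in prefers_over.items() if worse == holding)
        holding, paid = better, paid + fee_cents
    return holding, paid

print(money_pump())   # ('A', 9): back where it started, nine cents poorer
```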
Observing that people prefer some things to others and arriving at utility functions as the normative standard of rationality looks rather similar to the process you described of going from moral intuitions to attaching moral value to generalisations about them.
Whether an ideal rational agent would have a global utility function is a separate question. You can make it true by definition, but that just moves the question: why would one aspire to be such an agent? And what would one’s global utility function be? Defining rational agents as “autonomous programs that are capable of goal directed behavior” (from the same Wiki article) severs the connection with utility functions. You can put it back in: “a rational agent should select an action that is expected to maximize its performance measure” (Russell & Norvig), but that leaves the problem of defining its performance measure. However you slide these blocks around, they never fill the hole.
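To make that last point concrete, here is a minimal sketch of the Russell & Norvig schema. The agent machinery is almost nothing; the performance measure is just a parameter handed in from outside, which is the hole being moved around. The actions, outcome model, and measure below are all made up.

```python
import random

def rational_agent(actions, outcomes_of, performance_measure, samples=1000):
    # "Select an action that is expected to maximize its performance measure":
    # estimate each expectation by sampling, then take the argmax.
    def expected_score(action):
        return sum(performance_measure(outcomes_of(action)) for _ in range(samples)) / samples
    return max(actions, key=expected_score)

# Hypothetical stand-ins; the interesting argument, performance_measure,
# is exactly what the definition leaves unspecified.
actions = ["walk", "drive"]
outcomes_of = lambda a: {"walk": random.gauss(5, 1), "drive": random.gauss(4, 3)}[a]
performance_measure = lambda outcome: outcome

print(rational_agent(actions, outcomes_of, performance_measure))   # almost always 'walk'
```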
Huh. Reading this comment again, I realize I’ve shifted considerably closer to your view, while forgetting that we ever had this discussion in the first place.
Having non-global or circular preferences doesn’t mean a utility function doesn’t exist—it just means it’s far more complex.
Can you expand on that? I can’t find any description on the web of utility functions that aren’t intimately bound to global preferences. Well-behaved global preferences give you utility functions by the Utility Theorem; utility functions directly give you global preferences.
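The theorem in question is, as far as I can tell, the von Neumann–Morgenstern one, and its standard statement makes the binding explicit: a preference relation $\succeq$ over lotteries satisfies the completeness, transitivity, continuity, and independence axioms exactly when there is a utility function $u$ with

$$ A \succeq B \iff \mathbb{E}[u(A)] \ge \mathbb{E}[u(B)], $$

and conversely any such $u$ induces a complete, transitive $\succeq$ through the same equivalence.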
Someone recently remarked (in a comment I haven’t been able to find again) that circular preferences really mean a preference for running around in circles, but this is a redefinition of “preference”. A preference is what you were observing when you presented someone with pairs of alternatives and asked them to choose one from each. If, on eliciting a cyclic set of preferences, you ask them whether they prefer running around in circles or not, and they say not, then there you are, they’ve told you another preference. Are you then going to say they have a preference for contradicting themselves?
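Operationally, “a cyclic set of preferences” elicited this way is just a collection of pairwise answers that loops back on itself, which can be checked mechanically. A small sketch, with invented answers:

```python
# Invented pairwise answers: each tuple is (chosen, rejected).
elicited = [("tea", "coffee"), ("coffee", "cocoa"), ("cocoa", "tea")]

def has_cycle(pairs):
    # Follow "preferred to" links and see whether any alternative ends up
    # (indirectly) preferred to itself.
    graph = {}
    for better, worse in pairs:
        graph.setdefault(better, set()).add(worse)

    def reaches(start, target, seen=()):
        return any(nxt == target or (nxt not in seen and reaches(nxt, target, seen + (nxt,)))
                   for nxt in graph.get(start, ()))

    return any(reaches(x, x) for x in graph)

print(has_cycle(elicited))   # True: tea > coffee > cocoa > tea
```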