The previous paragraph seemed to be arguing that people pick the moral frameworks which are best at describing the ethical intuitions they already had. Why do you choose this different interpretation?
Ah, you’re right, I left out a few inferential steps. The important point is that over time, the frameworks take on a moral importance of their own: they cease to be mere models and instead become axioms. (More about this in my addendum.) That also blurs the line between “models that best explain intuitions” and “models that best justify intuitions”, especially since a consistent ethical framework is also good for your external image.
I don’t see the necessity. Can you expand on that?
To put it briefly: by “all forms of utilitarianism”, I wasn’t referring to the classical meaning of utilitarianism as maximizing the happiness of everyone, but instead the meaning it seems to have taken in common parlance: any theory where decisions are made by maximizing expected total utility. Nobody (that I know of) has principles that are entirely absolute: they are always weighted against other principles and possible consequences, implying that they must have different weightings that are compared to find the combination that produces the best result (interpretable as the one that produces the highest utility). I suppose you could reject this and say that people just have this insanely huge preference ordering for different outcomes, but that sounds more than a bit implausible. (Not to mention that you can construct a utility function for any given preference ordering, anyway.) Of course, it looks politically better to claim that your principles are absolute and not subject to negotiation, so people instinctively want to reject any such thoughts.
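To make that last parenthetical concrete, here is a minimal sketch for the simplest case, a complete and transitive ordering over a finite set (the outcomes below are invented for illustration): assigning each outcome its rank already gives a utility function that reproduces the ordering.

```python
# Minimal sketch: a complete, transitive ordering over a finite set of outcomes
# is reproduced by a utility function; the rank itself will do.
# The outcomes and their ordering are invented for illustration.

outcomes_worst_to_best = [
    "break a promise",
    "tell a white lie",
    "donate nothing",
    "donate 10%",
]

# Utility = position in the ordering.
utility = {outcome: rank for rank, outcome in enumerate(outcomes_worst_to_best)}

def prefers(a, b):
    """True iff a is ranked strictly above b under the ordering above."""
    return utility[a] > utility[b]

assert prefers("donate 10%", "tell a white lie")
assert not prefers("break a promise", "donate nothing")
```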
Nobody (that I know of) has principles that are entirely absolute: they are always weighted against other principles and possible consequences, implying that they must have different weightings that are compared to find the combination that produces the best result (interpretable as the one that produces the highest utility). I suppose you could reject this and say that people just have this insanely huge preference ordering for different outcomes, but that sounds more than a bit implausible. (Not to mention that you can construct a utility function for any given preference ordering, anyway.)
I reject both it and the straw alternative you offer. I see no reason to believe that people have utility functions, that people have global preferences satisfying the requirements of the utility function theorem, or that people have global preferences at all. People do not make decisions by weighing up the “utility” of all the alternatives and choosing the maximum. That’s an introspective fairy tale. You can ask people to compare any two things you like, but there’s no guarantee that the answers will mean anything. If you get cyclic answers, you haven’t found a money pump unless the alternatives are ones you can actually offer.
An Etruscan column or Bach’s cantata 148?
Three badgers or half a pallet of bricks? (One brick? A whole pallet?)
You might as well ask “Feathers or lead?” Whatever answer you get will be wrong.
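To spell out the money-pump caveat with a toy sketch (the goods, the fee, and the agent’s cyclic preferences below are all hypothetical): the pump only works when each alternative is something you can actually hand over and trade back.

```python
# Toy money pump: an agent with cyclic preferences A > B > C > A pays a small
# fee for each "upgrade", and one trip around the cycle leaves it holding what
# it started with, minus three fees. Goods, fee, and agent are hypothetical.

cyclic_preference = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

def accepts_trade(offered, held):
    # The agent swaps `held` for `offered`, paying the fee, iff it prefers `offered`.
    return (offered, held) in cyclic_preference

held, money, fee = "A", 100.0, 1.0
for offered in ("C", "B", "A"):   # one walk around the cycle
    if accepts_trade(offered, held):
        held, money = offered, money - fee

print(held, money)  # "A" again, but 97.0: the cycle has been pumped
```

If the alternatives are like the pairs above, things nobody can actually put on the table, the elicited cycle never costs anyone anything.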
Observing that people prefer some things to others and arriving at utility functions as the normative standard of rationality looks rather similar to the process you described of going from moral intuitions to attaching moral value to generalisations about them.
Whether an ideal rational agent would have a global utility function is a separate question. You can make it true by definition, but that just moves the question: why would one aspire to be such an agent? And what would one’s global utility function be? Defining rational agents as “autonomous programs that are capable of goal directed behavior” (from the same Wiki article) severs the connection with utility functions. You can put it back in: “a rational agent should select an action that is expected to maximize its performance measure” (Russell & Norvig), but that leaves the problem of defining its performance measure. However you slide these blocks around, they never fill the hole.
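To make the Russell & Norvig clause concrete, here is a minimal sketch of “select an action that is expected to maximize its performance measure”; the actions, the outcome probabilities, and the performance measure itself are all invented, which is the point: nothing in the definition says where that measure is supposed to come from.

```python
# Minimal sketch of "select an action that is expected to maximize its
# performance measure". Actions, outcome probabilities, and the performance
# measure are all invented; the definition is silent on choosing the measure.

action_outcomes = {                      # action -> list of (probability, outcome)
    "act_1": [(0.8, "good"), (0.2, "bad")],
    "act_2": [(0.5, "great"), (0.5, "bad")],
}

performance_measure = {"great": 10.0, "good": 6.0, "bad": 0.0}

def expected_performance(action):
    return sum(p * performance_measure[o] for p, o in action_outcomes[action])

best_action = max(action_outcomes, key=expected_performance)
print(best_action, expected_performance(best_action))  # act_2, 5.0
```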
Huh. Reading this comment again, I realize I’ve shifted considerably closer to your view, while forgetting that we ever had this discussion in the first place.
Having non-global or circular preferences doesn’t mean a utility function doesn’t exist—it just means it’s far more complex.
Can you expand on that? I can’t find any description on the web of utility functions that aren’t intimately bound to global preferences. Well-behaved global preferences give you utility functions by the Utility Theorem; utility functions directly give you global preferences.
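To spell out the second direction with a toy example (the items and their utilities are made up): comparing utilities gives a relation that is automatically complete and transitive, i.e. global in the sense above, so elicited preferences that lack those properties have no representing utility function.

```python
# Sketch of "utility functions directly give you global preferences": comparing
# utilities yields a relation that is complete (every pair is comparable) and
# transitive, hence free of cycles. The items and utilities are invented.
from itertools import permutations

utility = {"w": -2.0, "x": 3.0, "y": 1.5, "z": 1.5}
items = list(utility)

def weakly_prefers(a, b):
    return utility[a] >= utility[b]

# Completeness: for every pair, at least one direction holds.
assert all(weakly_prefers(a, b) or weakly_prefers(b, a)
           for a, b in permutations(items, 2))

# Transitivity: a >= b and b >= c imply a >= c.
assert all(weakly_prefers(a, c)
           for a, b, c in permutations(items, 3)
           if weakly_prefers(a, b) and weakly_prefers(b, c))
```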
Someone recently remarked (in a comment I haven’t been able to find again) that circular preferences really mean a preference for running around in circles, but this is a redefinition of “preference”. A preference is what you were observing when you presented someone with pairs of alternatives and asked them to choose one from each. If, on eliciting a cyclic set of preferences, you ask them whether they prefer running around in circles or not, and they say not, then there you are, they’ve told you another preference. Are you going to then say they have a preference for contradicting themselves?
wasn’t referring to the classical meaning of utilitarianism as maximizing the happiness of everyone, but instead the meaning it seems to have taken in common parlance: any theory where decisions are made by maximizing expected total utility.
I don’t think that’s the common usage. Maybe the shared etymology means any difference is bound to erode eventually, but I think the distinction is worth fighting for. A related distinction I think is important is consequentialism vs. utilitarianism. I take the modern meaning of consequentialism to be judging “good” purely in an ordinal sense and purely on the basis of consequences, though I’m not sure what Anscombe meant. Decision theory says that coherent consequentialism is equivalent to maximizing a utility function.