This implies that people pick the moral frameworks which are best at justifying the ethical intuitions they already had.
The previous paragraph seemed to be arguing that people pick the moral frameworks which are best at describing the ethical intuitions they already had. Why do you choose this different interpretation?
I was previously puzzled over why so many smart people reject all forms of utilitarianism, as ultimately everyone has to perform some sort of expected utility calculations in order to make moral decisions at all.
I don’t see the necessity. Can you expand on that?
With enough practice, our intuitions may be shifted towards the consciously held stance, which may be a good or bad thing.
Quite. Changing the theory to fit the data seems to me preferable to the reverse.
I don’t see the necessity. Can you expand on that?
I think you’re right not to see it. Valuing happiness is a relatively recent development in human thought. Much of ethics prior to the enlightenment dealt more with duties and following rules. In fact, seeking pleasure or happiness (particularly from food, sex, etc.) was generally looked down on or actively disapproved of. People may generally do what they calculate to be best, but “best” need not mean maximizing anything related to happiness.
Ultra-orthodox adherence to religion is probably the most obvious example of this principle, particularly Judaism, since there’s no infinitely-good-heaven to obfuscate the matter. You don’t follow the rules because they’ll make you or others happy, you follow them because you believe it’s the right thing to do.
Valuing happiness is a relatively recent development in human thought. Much of ethics prior to the enlightenment dealt more with duties and following rules.
This just isn’t true at all. Duty-based morality was mostly a Kantian invention; Kant was a contemporary of Bentham’s and died a few years before Mill was born. Pre-enlightenment ethics was dominated by Aristotelian virtue theory, which put happiness in a really important position (it might be wrong to consider happiness the reason for acting virtuously, but it is certainly coincident with eudaimonia).
Edit to say: I’m interpreting ethics as “the study of morality”. If by ethics you mean the actual rules governing the practices of people throughout the world, your comment makes more sense: for most people throughout history (maybe including right now), doing what is right has meant doing what someone tells you to do.
The Ancient Greek concept of happiness was significantly different from the modern concept. It tended to be teleological and prescriptive rather than individualistic. You achieved “true happiness” because you lived correctly; the correct way to live was not defined by the happiness you got out of it. There were some philosophers, Epicurus for example, who touched on beliefs closer to utilitarianism, but it was never close to mainstream, and his concept was a long way from Bentham’s. The idea that happiness was the ultimate human good, and that more total happiness was unequivocally and absolutely better, was not even close to a mainstream concept until the enlightenment.
Oh, some of Socrates’ fake debate opponents did argue for pleasure as the ultimate good. This was generally answered with the argument that true pleasure would require certain things, so that the pursuit of pure pleasure actually didn’t give one the greatest amount of pleasure. This concept has objectionably objective, teleological, and non-falsifiable properties; it is a very long way from the utilitarian advocacy of the pursuit of pleasure, because its definition of pleasure is so constrained.
Much of ethics prior to the enlightenment dealt more with duties and following rules.
Virtue ethics was generally about following rules. Duty was not the primary motivator, but if you did not do things you were obliged to do, like obey your liege, your father, the church, etc., you were not virtuous. Most of society was, and in many ways still is, run by people slavishly adhering to social customs irrespective of their individual or collective utilitarian value.
I did not claim that everyone operated explicitly off of a Kantian belief that duty was the ultimate good. I am simply pointing out that most people’s ethical systems were, in practice, simply based on obeying those society said should be obeyed and following rules society said they should follow. I don’t think this is particularly controversial, and that people can operate off of such systems shows that one need not be utilitarian to make moral judgements.
As I added in my edit, I find it plausible (though not certainly the case) that the ethical systems of individuals have often amounted to merely obeying social rules. Indeed, for the most part they continue to do so. I don’t think we disagree.
That said, as far as the scholarly examination of morality goes, there wasn’t any kind of paradigm shift away from duty-based theories to “happiness”-based theories. Either “theories that dealt with duty and following rules” means something like Kantian ethics or Divine Rule, in which case the Enlightenment saw an increase in such theories; or “duty-based theory” just refers to any theory which generates rules and duties, in which case utilitarianism is just as much a duty-based theory as anything else (as it entails a duty to maximize utility).
Virtue ethics “was generally about following rules” only in this second sense. Obviously virtue ethics dealt with happiness in a different way than utilitarianism, since, you know, they’re not the same thing. I agree that the Ancient Greek word that gets translated as “happiness” in the Nicomachean Ethics means something different from what we mean by happiness; I like “flourishing”. But it certainly includes happiness, and it is far more central to virtue ethics than duty is (for Aristotle, eudaimonia is the purpose of your existence).
Bentham and Mill were definitely innovators, I’m not disputing that. But I think their innovation had more to do with their consequentialism than their hedonism. What seems crucially new, to me, is that actions are evaluated exclusively by the effect they have on the world. Previous ethical theories are theories for the powerless: when you don’t know how you affect the world, it doesn’t make sense to judge actions by their effects. The scientific revolution, and in particular the resulting British empiricism, were crucial for making this sort of innovation possible.
It’s also true that certain kinds of pleasure came to be looked down upon less than they were before, but I think this has less to do with the theoretical innovations of utilitarianism than with economic and social changes leading to changes in what counts as virtue, which Hume noted. After all, Mill felt the need to distinguish between higher pleasures (art, friendship, etc.) and lower pleasures (sex, food, drink), the former of which couldn’t be traded for any amount of the latter and were vastly more valuable.
Anyway, I definitely agree that you don’t have to be a utilitarian to make moral judgments. I was just replying to the notion that pre-utilitarian theories were best understood as being A) About duty and B) Not about happiness.
My reading of that sentence was that Kaj_Sotala focused not on the happiness part of utilitarianism, but on the expected utility calculation part. That is, that everyone needs to make an expected utility calculation to make moral decisions. I don’t think any particular type of utility was meant to be implied as necessary.
Well, there was Epicurus...
The previous paragraph seemed to be arguing that people pick the moral frameworks which are best at describing the ethical intuitions they already had. Why do you choose this different interpretation?
Ah, you’re right, I left out a few inferential steps. The important point is that over time, the frameworks take on a moral importance of their own—they cease to be mere models, instead becoming axioms. (More about this in my addendum.) That also makes the meanings of “models that best explain intuitions” and “models that best justify intuitions” blend together, especially since a consistent ethical framework is also good for your external image.
I don’t see the necessity. Can you expand on that?
To put it briefly: by “all forms of utilitarianism”, I wasn’t referring to the classical meaning of utilitarianism as maximizing the happiness of everyone, but instead the meaning it seems to have taken in common parlance: any theory where decisions are made by maximizing expected total utility. Nobody (that I know of) has principles that are entirely absolute: they are always weighted against other principles and possible consequences, implying that they must have different weightings that are compared to find the combination that produces the best result (interpretable as the one that produces the highest utility). I suppose you could reject this and say that people just have this insanely huge preference ordering for different outcomes, but that sounds more than a bit implausible. (Not to mention that you can construct a utility function for any given preference ordering, anyway.) Of course, it looks politically better to claim that your principles are absolute and not subject to negotiation, so people want to instinctively reject any such thoughts.
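That last parenthetical can be made concrete. A minimal sketch, assuming a finite set of outcomes and a total preference ordering (the outcomes and names below are invented purely for illustration): assign each outcome its rank in the ordering, and maximizing that rank reproduces exactly the choices the ordering dictates.

```python
# Sketch: any total preference ordering over finitely many outcomes can be
# represented by a utility function -- rank in the ordering will do.
# Outcomes are listed from least to most preferred (invented for illustration).
preference_ordering = ["break a promise", "tell a white lie", "stay silent", "tell the truth"]

# The utility of an outcome is simply its position in the ordering.
utility = {outcome: rank for rank, outcome in enumerate(preference_ordering)}

def choose(options):
    """Pick the option with the highest utility; this reproduces exactly
    the choices the original preference ordering dictates."""
    return max(options, key=lambda option: utility[option])

print(choose(["tell a white lie", "stay silent"]))  # -> stay silent
```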
Nobody (that I know of) has principles that are entirely absolute: they are always weighted against other principles and possible consequences, implying that they must have different weightings that are compared to find the combination that produces the best result (interpretable as the one that produces the highest utility). I suppose you could reject this and say that people just have this insanely huge preference ordering for different outcomes, but that sounds more than a bit implausible. (Not to mention that you can construct a utility function for any given preference ordering, anyway.)
I reject both it, and the straw alternative you offer. I see no reason to believe that people have utility functions, that people have global preferences satisfying the requirements of the utility function theorem, or that people have global preferences at all. People do not make decisions by weighing up the “utility” of all the alternatives and choosing the maximum. That’s an introspective fairy tale. You can ask people to compare any two things you like, but there’s no guarantee that the answers will mean anything. If you get cyclic answers, you haven’t found a money pump unless the alternatives are ones you can actually offer.
An Etruscan column or Bach’s cantata 148?
Three badgers or half a pallet of bricks? (One brick? A whole pallet?)
You might as well ask “Feathers or lead?” Whatever answer you get will be wrong.
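For readers unfamiliar with the term: a money pump exploits cyclic preferences by charging a small fee for each “upgrade” around the cycle. A toy simulation, with goods and fees invented purely for illustration, of what that looks like when the trades can actually be offered:

```python
# Toy money pump: an agent with cyclic pairwise preferences (A < B, B < C,
# C < A) pays a small fee for each "upgrade" and ends up where it started.
# The goods and the fee are made up for illustration.
prefers = {("A", "B"): "B", ("B", "C"): "C", ("C", "A"): "A"}  # elicited answers

def offer_trade(holding, offered, wealth, fee=1):
    """Offer to swap `holding` for `offered` at a fee; the agent accepts
    iff its elicited pairwise preference favors the offered good."""
    if prefers.get((holding, offered)) == offered:
        return offered, wealth - fee  # agent trades up and pays the fee
    return holding, wealth

holding, wealth = "A", 10
for offered in ["B", "C", "A"]:  # one full tour of the cycle
    holding, wealth = offer_trade(holding, offered, wealth)

print(holding, wealth)  # -> A 7: same good as at the start, three fees poorer
```

After one tour of the cycle the agent holds exactly what it started with, three fees poorer. This is why the caveat above matters: if the alternatives can’t actually be offered, the cyclic answers cost the agent nothing.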
Observing that people prefer some things to others and arriving at utility functions as the normative standard of rationality looks rather similar to the process you described of going from moral intuitions to attaching moral value to generalisations about them.
Whether an ideal rational agent would have a global utility function is a separate question. You can make it true by definition, but that just moves the question: why would one aspire to be such an agent? And what would one’s global utility function be? Defining rational agents as “autonomous programs that are capable of goal directed behavior” (from the same Wiki article) severs the connection with utility functions. You can put it back in: “a rational agent should select an action that is expected to maximize its performance measure” (Russell & Norvig), but that leaves the problem of defining its performance measure. However you slide these blocks around, they never fill the hole.
Huh. Reading this comment again, I realize I’ve shifted considerably closer to your view, while forgetting that we ever had this discussion in the first place.
Having non-global or circular preferences doesn’t mean a utility function doesn’t exist—it just means it’s far more complex.
Can you expand on that? I can’t find any description on the web of utility functions that aren’t intimately bound to global preferences. Well-behaved global preferences give you utility functions by the Utility Theorem; utility functions directly give you global preferences.
Someone recently remarked (in a comment I haven’t been able to find again) that circular preferences really mean a preference for running around in circles, but this is a redefinition of “preference”. A preference is what you were observing when you presented someone with pairs of alternatives and asked them to choose one from each. If, on eliciting a cyclic set of preferences, you ask them whether they prefer running around in circles or not, and they say not, then there you are, they’ve told you another preference. Are you going to then say they have a preference for contradicting themselves?
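The elicitation procedure described above can be made concrete: record each pairwise answer as a directed edge and check whether the resulting graph is acyclic, since a utility function can rationalize the answers only if no cycle exists. A sketch, with answers invented to show a violation:

```python
# Sketch: elicited pairwise preferences as a directed graph; some utility
# function can rationalize the answers only if the graph has no cycle.
answers = [("tea", "coffee"), ("coffee", "cocoa"), ("cocoa", "tea")]  # (preferred, rejected)

def has_cycle(edges):
    """Depth-first search for a cycle among the elicited preferences."""
    graph = {}
    for winner, loser in edges:
        graph.setdefault(winner, []).append(loser)
    def visit(node, path):
        if node in path:
            return True
        return any(visit(nxt, path | {node}) for nxt in graph.get(node, []))
    return any(visit(node, set()) for node in graph)

print(has_cycle(answers))  # -> True: no utility function fits these answers
```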
wasn’t referring to the classical meaning of utilitarianism as maximizing the happiness of everyone, but instead the meaning it seems to have taken in common parlance: any theory where decisions are made by maximizing expected total utility.
I don’t think that’s the common usage. Maybe the same etymology means that any difference must erode, but I think it’s worth fighting. A related distinction I think is important is consequentialism vs utilitarianism. I think that the modern meaning of consequentialism is using “good” purely in an ordinal sense and purely based on consequences, but I’m not sure what Anscombe meant. Decision theory says that coherent consequentialism is equivalent to maximizing a utility function.