I fear you may be thinking “serial killer: karma −937; my sister: karma +2764”.
A utilitarian would say: consider what that person is likely to do in the future. The serial killer might murder dozens more people, or might get caught and rot in jail. Your sister will most likely do neither. And consider how other people will feel about the deaths. The serial killer is likely to have more enemies, fewer friends, and fewer close friends. So the net utility change from shooting the serial killer is much less negative (or even positive) than that from shooting your sister, and you need not (indeed should not) be indifferent between the two.
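To make that concrete, here is a toy sketch (Python, with every number invented purely for illustration) of the kind of net-utility comparison I mean; none of these quantities are measurable in practice, the point is only to show why the two cases come out nowhere near equivalent:

```python
# Toy net-utility comparison; all numbers below are hypothetical.

def net_utility_change(expected_future_harm, expected_future_good, grief_of_survivors):
    """Crude net utility change from shooting this person, on a utilitarian accounting."""
    # Shooting them prevents the harm they would have caused, but also removes
    # the good they would have done and imposes grief on those who cared about them.
    return expected_future_harm - expected_future_good - grief_of_survivors

serial_killer = net_utility_change(
    expected_future_harm=40.0,  # dozens of likely future murders
    expected_future_good=1.0,   # little expected positive contribution
    grief_of_survivors=2.0,     # few friends or close ties
)
sister = net_utility_change(
    expected_future_harm=0.0,
    expected_future_good=30.0,
    grief_of_survivors=25.0,    # many close relationships
)

print(serial_killer, sister)  # 37.0 -55.0: nowhere near indifference
```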
In general, utilitarianism gets results that resemble those of intuitive morality, but it tends to get them indirectly. Or perhaps it would be better to say: intuitive morality gets results that resemble those of utilitarianism, but it gets them via short-cuts and heuristics, so that things which tend to come out badly in utilitarian terms simply feel “bad”.
In a least convenient possible world, where the serial killer really enjoys killing people and only kills people who have no friends or family, won’t be missed, and are quite depressed, is it conceivable that utilitarianism would imply indifference to the choice?
It’s certainly possible in principle that it might end up that way. A utilitarian would say: our moral intuitions are formed by our experience of “normal” situations; in situations as weirdly abnormal as the ones you’d need in order to make utilitarianism favour saving the serial killer at the expense of an ordinary upright citizen, or to make slavery a good thing overall, or whatever, we shouldn’t trust our intuition.
And this is the crux of my problem with utilitarianism, I guess. I just don’t see any good reason to prefer it over my intuition when the two are in conflict.
Even though your intuition might be wrong in outlying cases, it’s still a better use of your resources not to think through every case, so I’d agree that using your intuition is better than using reasoned utilitarianism for most decisions for most people.
It’s better to strictly adhere to an almost-right moral system than to spend significant resources on working out arbitrarily-close-to-right moral solutions, for sufficiently high values of “almost-right”, in other words. In addition to the inherent efficiency benefit, this will make you more predictable to others, lowering your transaction costs in interactions with them.
My problem is a bit more fundamental than that. If the premise of utilitarianism is that it is morally/ethically right for me to give equal weight to every person’s utility in my own utility function, then I dispute the premise, not the procedure for working out the correct thing to do given the premise. The fact that utilitarianism can lead to moral/ethical decisions that conflict with my intuitions seems to me a reason to question the premises of utilitarianism rather than to question my intuitions.
Your intuitions will be biased toward favoring a sibling over a stranger; evolution has seen to that via kin selection.
Utilitarianism tries to maximize utility for all, regardless of relatedness. Even if you adjust the weightings of individuals according to how likely particular individuals are to have a greater impact on overall utility, you don’t (in general) get weightings that match your intuitions.
I think it is unreasonable to expect your moral intuitions to ever approximate utilitarianism (or vice versa) unless you are making moral decisions about people you don’t know at all.
In reality, the money I spend on my two cats could be spent improving the happiness of many humans—humans that I don’t know at all who are living a long way away from me. Clearly I don’t apply utilitarianism to my moral decision to keep pets. I am still confused about how much I should let utilitarianism shift my emotionally-based lifestyle decisions.
I think you are construing the term “utilitarianism” too narrowly. The only reason you should be a utilitarian is if you intrinsically value the utility functions of other people. However, you don’t have to value the entire thing for the label to be appropriate. You still care about a large part of that murderer’s utility function, I assume, as well as that of non-murderers. Not classical utilitarianism, but the term still seems appropriate.
Utilitarianism seems a fairly useless ethical system if the utility function is subjective, whether because individuals get to pick and choose which parts of others’ utility functions to respect or because they are allowed to choose subjective weights for others’ utilities. It would seem to degenerate into an impractical-to-implement system in which everybody just justifies whatever they feel like doing anyway.
Well, assuming you get to make up your own utility function, yes. However, I don’t think that’s the case. It seems more likely that we are born with utility functions, or rather with something out of which we can construct a coherent utility function. Given the psychological unity of mankind, there are likely to be a lot of similarities in these utility functions across the species.
Didn’t you just suggest that we don’t have to value the entirety of a murderer’s utility function? There are certainly similarities between individuals’ utility functions, but they are not identical. That still doesn’t address the differential weighting issue either. It’s fairly clear that most people do in fact put greater weight on the utility of their family and friends than on that of strangers. I believe that is perfectly ethical and moral, but it conflicts with a conception of utilitarianism that requires equal weights for all humans. If weights are not equal then utility is not universal, so utilitarianism does not provide a unique ‘right’ answer to any ethical dilemma, and it therefore seems to me to be of limited value.
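To illustrate the weighting point with a small hypothetical sketch (names and numbers invented): the same individual utilities, aggregated with equal weights versus relationship-based weights, can rank two outcomes differently, so which answer comes out ‘right’ depends on whose weights you use.

```python
# Hypothetical individual utilities for two outcomes, A and B.
utilities = {
    "me":       {"A": 5.0,  "B": 1.0},
    "sister":   {"A": 4.0,  "B": 0.0},
    "stranger": {"A": -2.0, "B": 7.0},
}

equal_weights   = {"me": 1.0, "sister": 1.0, "stranger": 1.0}  # classical utilitarian weights
partial_weights = {"me": 1.0, "sister": 0.8, "stranger": 0.1}  # intuition-like weights

def aggregate(weights, outcome):
    """Weighted sum of everyone's utility for the given outcome."""
    return sum(weights[person] * utilities[person][outcome] for person in utilities)

for label, weights in (("equal", equal_weights), ("partial", partial_weights)):
    a, b = aggregate(weights, "A"), aggregate(weights, "B")
    print(f"{label}: A={a:.1f}, B={b:.1f} -> prefers {'A' if a > b else 'B'}")
# equal:   A=7.0, B=8.0 -> prefers B
# partial: A=8.0, B=1.7 -> prefers A
```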
If you choose to reject any system that doesn’t provide a “unique ‘right’ answer” then you’re going to reject every system so far devised. Have you read Greene’s The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it?
However, I agree with you that any form of utilitarianism that has to have different weights when applied by different people is highly problematic. So we’re left with:
Pure selfless utilitarianism conflicts with our natural intuitions about morality when our friends and relatives are involved.
Untrained intuitive morality results in favoring humans unequally based on relationships, and will appear unfair from a third-party viewpoint.
You can train yourself to some extent to find a utilitarian position more intuitive. If you work with just about any consistent system for long enough, it’ll start to feel more natural. I doubt that anyone who has any social or familial connections can be a perfect utilitarian all the time: there are always times when family or friends take priority over the rest of the world.
It seems to me that utilitarianism is trying to answer the wrong question. I don’t think there’s anything inherently wrong with individuals simply trying their best to satisfy their own unique utility functions (which generally include some concern for the utility functions of others, but not equal concern for all others). I see morality and ethics largely not as theoretical questions about what is ‘right’ but as empirical questions about which moral and ethical decision processes produce an evolutionarily stable strategy for co-existing with other agents with different goals.
On my view of morality, it’s accepted that different agents will have different utilities for different outcomes and that there is not, in general, one outcome which all agents will agree is optimal. Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict-resolution process as optimal. It is not the problem of achieving an outcome that all agents can agree is optimal. For humans, biological and cultural evolution have equipped us with a set of rules and heuristics for resolving conflicts of interest that have worked well enough to get us to where we are today. My interest in morality/ethics is in improving that process, not in some mythical quest for what is ‘right’.
I haven’t, but I’ve seen it mentioned before so I should check it out at some point. To be honest the title put me off when I first saw it linked because it makes it sound like it’s aimed at someone who still holds the naive view of morality that it’s about doing what is ‘right’.
I think we’re in agreement here about morality being the problem of developing a framework for resolving conflicts of interest.
For me the difficult questions arise when we take one universalizable moral principle and try to apply it at every level of organization, from the personal “what should I be doing with my time and energy at this moment?” to the public “what should person A be permitted/obliged to do?”
I was thinking about raising the question of utilitarianism being difficult to institute as an ESS when writing my previous post. To a certain extent, we (in democratic cultures with an independent judiciary) train our intuitions to accept the idea of fairness as we grow up. Our parents, kindergarten and school teachers do their best to instill certain values. The fact that racism and sexism can become entrenched during the formative years suggests to me that the equality and fairness principles I’ve grown up with can also be trained. We share a psychological architecture, but there is enough flexibility that we can train our moral intuitions (to some extent).
Utilitarianism is in principle universalizable, but is it practically universalizable at all decision levels? What training (or brainwashing) and threats of defector punishment would we need to implement to completely override our natural nepotism? To me this seems like an impractical goal.
I’ve been somewhat confused by the idea of anyone wanting to make all their decisions on utilitarian principles (even at the expense of familial obligations), so I wondered if I’ve been erecting an extreme utilitarian strawman. I think I have, and I’m seeing a glimmer of a solution to the confusion.
Given that we all have relationships we value, and that forcing ourselves to ignore those relationships in our daily activities represents negative utility, we cannot maximize utility with a moral system that requires everyone to treat everyone else as equal at all times and in all decisions. Any genuine utilitarian calculation must account for everyone’s emotional satisfaction from relationship activities.
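As a toy illustration of that last point (all numbers invented): if ignoring one’s relationships itself costs everyone some utility, a rule that forbids partiality can score worse, by its own aggregate measure, than one that allows some relationship-directed effort.

```python
# All numbers here are hypothetical; this only illustrates the structure of the argument.

people = 5                  # a tiny imaginary population
base_utility = 10.0         # per-person utility from impartial goods
relationship_utility = 3.0  # per-person satisfaction from tending one's relationships
impartiality_gain = 1.0     # extra impartial good per person when nobody is partial

def total_utility(allow_partiality: bool) -> float:
    """Aggregate utility under a rule that allows or forbids partiality."""
    per_person = base_utility
    if allow_partiality:
        per_person += relationship_utility
    else:
        # Forbidding partiality buys a little extra impartial good,
        # but everyone forfeits the satisfaction of their relationships.
        per_person += impartiality_gain
    return people * per_person

print(total_utility(True), total_utility(False))  # 65.0 55.0 under these numbers
```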
(I feel less confused now. I’ll have to think about this some more.)
I have skimmed it and will return to it ASAP. Thank you very much for recommending it!
Yes. But if the “serial killer” is actually someone who enjoys helping others who want to commit suicide (and who won’t harm anyone when they do), are they really a bad person at all?
Is shooting them really better than shooting a random person?
Also, would the verdict on this question change if the people he killed had attempted suicide but failed, or had wanted to commit suicide but lacked the willpower to go through with it?