Didn’t you just suggest that we don’t have to value the entirety of a murderer’s utility function? There are certainly similarities between individuals’ utility functions, but they are not identical. That still doesn’t address the differential weighting issue. It’s fairly clear that most people do in fact put greater weight on the utility of their family and friends than on that of strangers. I believe that is perfectly ethical and moral, but it conflicts with a conception of utilitarianism that requires equal weights for all humans. If the weights are not equal then utility is not universal, utilitarianism does not provide a unique ‘right’ answer to any ethical dilemma, and it therefore seems to me to be of limited value.
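To make the weighting point concrete, here is a toy sketch (my own made-up numbers, nothing from the thread): two evaluators apply the same “sum everyone’s utility” rule, but with different weights on family versus strangers, and so rank the same pair of outcomes in opposite orders.

```python
# Toy illustration (invented numbers): agent-relative weights break the
# uniqueness of the "utilitarian" ranking. Each evaluator sums everyone's
# utility, but weights family more heavily than strangers (or vice versa).

people = ["me", "my_child", "stranger"]

# Utility each person gets under two candidate outcomes.
outcomes = {
    "A": {"me": 1, "my_child": 5, "stranger": 0},
    "B": {"me": 0, "my_child": 0, "stranger": 8},
}

# Two evaluators with different (non-equal) weights over the same people.
weights = {
    "parent":    {"me": 1.0, "my_child": 3.0, "stranger": 0.5},
    "bystander": {"me": 0.5, "my_child": 0.5, "stranger": 1.0},
}

def weighted_total(outcome, w):
    """Weighted sum of everyone's utility under one outcome."""
    return sum(w[p] * outcomes[outcome][p] for p in people)

for evaluator, w in weights.items():
    totals = {o: weighted_total(o, w) for o in outcomes}
    best = max(totals, key=totals.get)
    print(evaluator, totals, "->", best)

# parent prefers A (16.0 vs 4.0); bystander prefers B (8.0 vs 3.0).
# With unequal weights there is no single aggregate everyone shares,
# hence no unique "right" answer.
```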
It’s fairly clear that most people do in fact put greater weight on the utility of their family and friends than on that of strangers. I believe that is perfectly ethical and moral, but it conflicts with a conception of utilitarianism that requires equal weights for all humans. If the weights are not equal then utility is not universal, utilitarianism does not provide a unique ‘right’ answer to any ethical dilemma, and it therefore seems to me to be of limited value.
However, I agree with you that any form of utilitarianism that has to have different weights when applied by different people is highly problematic. So we’re left with:
Pure selfless utilitarianism conflicts with our natural intuitions about morality when our friends and relatives are involved.
Untrained intuitive morality results in favoring humans unequally based on relationships and will appear unfair from a third-party viewpoint.
You can train yourself to some extent to find a utilitarian position more intuitive. If you work with just about any consistent system for long enough, it’ll start to feel more natural. I doubt that anyone who has any social or familial connections can be a perfect utilitarian all the time: there are always times when family or friends take priority over the rest of the world.
If you choose to reject any system that doesn’t provide a “unique ‘right’ answer” then you’re going to reject every system so far devised.
It seems to me that utilitarianism is trying to answer the wrong question. I don’t think there’s anything inherently wrong with individuals simply trying their best to satisfy their own unique utility functions (which generally include some concern for the utility functions of others, but not equal concern for all others). I see morality and ethics, to a large extent, not as theoretical questions about what is ‘right’ but as empirical questions about which moral and ethical decision processes produce an evolutionarily stable strategy (ESS) for co-existing with other agents that have different goals.
On my view of morality, it’s accepted that different agents will have different utilities for different outcomes, and that there is in general no single outcome which all agents will agree is optimal. Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal. It is not a problem of achieving an outcome that all agents can agree is optimal. For humans, biological and cultural evolution have equipped us with a set of rules and heuristics for the resolution of conflicts of interest that have worked well enough to get us to where we are today. My interest in morality/ethics is in improving the process, not in some mythical quest for what is ‘right’.
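As a toy illustration of the process/outcome distinction (my own construction, purely to make the point concrete): two agents each want the same indivisible thing, so no outcome is optimal for both, yet both can endorse a symmetric procedure such as a fair coin flip.

```python
# Toy illustration (my construction, not from the discussion above): two agents
# disagree about which *outcome* is best, yet can both accept the same *procedure*.
import random

agents = ["alice", "bob"]

def utility(agent, winner):
    # Each agent values getting the single contested item at 1, losing it at 0,
    # so there is no outcome that both agents agree is optimal.
    return 1 if agent == winner else 0

def fair_coin_procedure():
    # A symmetric conflict-resolution process: flip a fair coin.
    return random.choice(agents)

# Ex ante, each agent's expected utility under the procedure is 0.5, and the
# rule treats both identically, so each can endorse the *process* as the best
# available resolution even though neither endorses the other's preferred outcome.
expected = {a: sum(utility(a, w) for w in agents) / len(agents) for a in agents}
print("expected utilities:", expected)        # {'alice': 0.5, 'bob': 0.5}
print("this time the item goes to:", fair_coin_procedure())
```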
Have you read Greene’s The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About It?
I haven’t, but I’ve seen it mentioned before so I should check it out at some point. To be honest the title put me off when I first saw it linked because it makes it sound like it’s aimed at someone who still holds the naive view of morality that it’s about doing what is ‘right’.
Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal.
I think we’re in agreement here.
For me, the difficult questions arise when we take one universalizable moral principle and try to apply it at every level of organization, from the personal “what should I be doing with my time and energy at this moment?” to the public “what should person A be permitted/obliged to do?”
I was thinking about raising the question of utilitarianism being difficult to institute as an ESS when writing my previous post. To a certain extent, we (in democratic cultures with an independent judiciary) train our intuitions to accept the idea of fairness as we grow up. Our parents, kindergarten and school teachers do their best to instill certain values. The fact that racism and sexism can become entrenched during formative years suggests to me that the equality and fairness principles I’ve grown up with can also be trained. We share a psychological architecture, but there is enough flexibility that we can train our moral intuitions (to some extent).
Utilitarianism is in principle universalizable, but is it practically universalizable at all decision levels? What training (or brainwashing) and threats of defector punishment would we need to implement to completely override our natural nepotism? To me this seems like an impractical goal.
I’ve been somewhat confused by the idea of anyone wanting to make all their decisions on utilitarian principles (even at the expense of familial obligations), so I wondered if I’ve been erecting an extreme utilitarian strawman. I think I have, and I’m seeing a glimmer of a solution to the confusion.
Given that we all have relationships we value, and that forcing ourselves to ignore those relationships in our daily activities represents negative utility, we cannot maximize utility with a moral system that requires everyone to treat everyone else as equal at all times and in all decisions. Any genuine utilitarian calculation must account for everyone’s emotional satisfaction from relationship activities.
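To put rough (entirely invented) numbers on that: if suppressing relationship activities itself carries a utility cost, a rule demanding strict impartiality in every daily decision can score worse by the utilitarian’s own sum than a rule that permits some partiality.

```python
# Toy numbers (mine, purely illustrative): if forgoing relationship activities
# costs utility, a rule of strict impartiality in every decision can lose on
# its own terms to a rule that allows some partiality.

N_PEOPLE = 3                       # a tiny "society": me plus two others
RELATIONSHIP_BONUS = 4             # satisfaction from spending effort on family/friends
IMPARTIAL_BENEFIT_PER_PERSON = 1   # benefit of spreading the same effort equally

# Policy 1: strict impartiality in every decision -- forgo the relationship bonus.
strict_total = N_PEOPLE * IMPARTIAL_BENEFIT_PER_PERSON                               # 3

# Policy 2: allow partiality in daily life -- smaller spread benefit, keep the bonus.
partial_total = (N_PEOPLE - 1) * IMPARTIAL_BENEFIT_PER_PERSON + RELATIONSHIP_BONUS   # 6

print(strict_total, partial_total)  # with these numbers, permitting partiality wins the sum
```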
(I feel less confused now. I’ll have to think about this some more.)
I have skimmed it and will return to it ASAP. Thank you very much for recommending it!