If you choose to reject any system that doesn’t provide a “unique ‘right’ answer”, then you’re going to reject every system so far devised.
It seems to me that utilitarianism is trying to answer the wrong question. I don’t think there’s anything inherently wrong with individuals simply trying their best to satisfy their own unique utility functions (which generally include some concern for the utility functions of others, but not equal concern for all others). I see morality and ethics largely not as theoretical questions about what is ‘right’ but as empirical questions about which moral and ethical decision processes produce an evolutionarily stable strategy for co-existing with other agents with different goals.
On my view of morality it’s accepted that different agents will have different utilities for different outcomes and that there is not in general one outcome which all agents will agree is optimal. Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal. It is not a problem of achieving an outcome that all agents can agree is optimal. For humans, biological and cultural evolution have equipped us with a set of rules and heuristics for the resolution of conflicts of interest that have worked well enough to get us to where we are today. My interest in morality/ethics is in improving the process, not in some mythical quest for what is ‘right’.
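To make the process-versus-outcome distinction concrete, here is a minimal sketch in Python (the agents, the numbers, and the coin-flip procedure are all invented for illustration): the two agents disagree about which outcome is best, but each does better in expectation under a procedure both can accept than they would if the other agent simply decided.

```python
import random

# Toy illustration (invented numbers): two agents score the same two
# outcomes differently, so no single outcome is optimal for both.
utilities = {
    "A": {"outcome1": 10, "outcome2": 2},
    "B": {"outcome1": 3,  "outcome2": 9},
}

def preferred_outcome(agent):
    """The outcome this agent would unilaterally choose."""
    return max(utilities[agent], key=utilities[agent].get)

def coin_flip_procedure():
    """A hypothetical resolution process: each agent's preferred outcome
    is selected with equal probability."""
    return random.choice([preferred_outcome("A"), preferred_outcome("B")])

def expected_utility(agent):
    """Expected utility for an agent under the coin-flip procedure."""
    outcomes = [preferred_outcome(a) for a in utilities]
    return sum(utilities[agent][o] for o in outcomes) / len(outcomes)

print(preferred_outcome("A"), preferred_outcome("B"))  # outcome1 outcome2 -- the agents disagree
print(expected_utility("A"), expected_utility("B"))    # 6.0 6.0 -- versus 2 or 3 if the other agent decided
print(coin_flip_procedure())                           # one run of the procedure
```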
Have you read Greene’s The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it?
I haven’t, but I’ve seen it mentioned before, so I should check it out at some point. To be honest, the title put me off when I first saw it linked, because it makes it sound like it’s aimed at someone who still holds the naive view that morality is about doing what is ‘right’.
Morality is then the problem of developing a framework for resolving conflicts of interest in such a way that all the agents can accept the conflict resolution process as optimal.
I think we’re in agreement here.
For me, the difficult questions arise when we take one universalizable moral principle and try to apply it at every level of organization, from the personal “what should I be doing with my time and energy at this moment?” to the public “what should person A be permitted/obliged to do?”
I was thinking about raising the question of utilitarianism being difficult to institute as an ESS when writing my previous post. To a certain extent, we (in democratic cultures with an independent judiciary) train our intuitions to accept the idea of fairness as we grow up. Our parents, kindergarten teachers, and school teachers do their best to instill certain values. The fact that racism and sexism can become entrenched during our formative years suggests to me that the equality and fairness principles I’ve grown up with can also be trained. We share a psychological architecture, but there is enough flexibility that we can train our moral intuitions (to some extent).
Utilitarianism is in principle universalizable, but is it practically universalizable at all decision levels? What training (or brainwashing) and threats of defector punishment would we need to implement to completely override our natural nepotism? To me this seems like an impractical goal.
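To illustrate the ESS worry in miniature, here is a sketch of Maynard Smith’s stability condition in Python, with invented payoffs deliberately chosen so that nepotism pays at the individual level; it shows the shape of the question, not an empirical claim about actual human payoffs.

```python
# Toy ESS check (invented payoffs). payoff[a][b] is strategy a's payoff
# when interacting with strategy b.
payoff = {
    "impartial_utilitarian": {"impartial_utilitarian": 3, "nepotist": 1},
    "nepotist":              {"impartial_utilitarian": 4, "nepotist": 2},
}

def is_ess(resident, mutant):
    """Maynard Smith's condition: the resident strategy is evolutionarily
    stable against the mutant if it does strictly better against itself,
    or ties against itself but does better against the mutant than the
    mutant does against itself."""
    r_vs_r = payoff[resident][resident]
    m_vs_r = payoff[mutant][resident]
    r_vs_m = payoff[resident][mutant]
    m_vs_m = payoff[mutant][mutant]
    return r_vs_r > m_vs_r or (r_vs_r == m_vs_r and r_vs_m > m_vs_m)

print(is_ess("impartial_utilitarian", "nepotist"))  # False: nepotists can invade with these payoffs
print(is_ess("nepotist", "impartial_utilitarian"))  # True: nepotism is stable with these payoffs
```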
I’ve been somewhat confused by the idea of anyone wanting to make all their decisions on utilitarian principles (even at the expense of familial obligations), so I wondered if I’ve been erecting an extreme utilitarian strawman. I think I have, and I’m seeing a glimmer of a solution to the confusion.
Given that we all have relationships we value, and that forcing ourselves to ignore those relationships in our daily activities carries negative utility, we cannot maximize utility with a moral system that requires everyone to treat everyone else as equal at all times and in all decisions. Any genuine utilitarian calculation must account for everyone’s emotional satisfaction from relationship activities.
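As a back-of-the-envelope illustration (Python, with made-up numbers for two agents): forcing strict impartiality can lower the aggregate precisely because the satisfaction people get from their relationships is itself part of the utility being summed.

```python
# Toy aggregate-utility comparison (made-up numbers).
base_welfare       = {"agent1": 5, "agent2": 5}  # welfare independent of partiality
relationship_bonus = {"agent1": 3, "agent2": 3}  # satisfaction from favouring one's own relationships
impartiality_gain  = {"agent1": 1, "agent2": 1}  # what strict impartiality adds for everyone else

def total_utility(allow_partiality):
    """Sum utility over both agents, with or without relationship partiality."""
    total = sum(base_welfare.values())
    if allow_partiality:
        total += sum(relationship_bonus.values())
    else:
        total += sum(impartiality_gain.values())
    return total

print(total_utility(True))   # 16: partiality allowed, relationship satisfaction counted
print(total_utility(False))  # 12: partiality forbidden, that satisfaction is lost
```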
(I feel less confused now. I’ll have to think about this some more.)