Short answer? We don’t. Not really. Human beings have an evolved moral instinct. These evolved moral inclinations lead us to assign a high value to human life and well-being. The closest thing to an internally coherent ethical structure seems to be utilitarianism. (It sounds bad for a rationalist to admit “I value all human life equally, except I value myself and my children somewhat more.”)
But we are not really utilitarians. Our mental architecture doesn’t allow most of us to really treat every stranger on earth as though they are as valuable as ourselves or our own children.
It sounds bad for a rationalist to admit “I value all human life equally, except I value myself and my children somewhat more.”
Only because that’s logically contradictory. If you drop the “equally” part it sounds fine to me: “I value all human life, but I value some human lives more than others.”
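As a toy sketch (my own numbers and function name, nothing from this thread), the difference between the two positions is just a choice of weights: strict utilitarianism weights every person's welfare equally, while the commenter's position weights some people more.

```python
def total_value(welfares, weights=None):
    """Sum each person's welfare, optionally weighted per person."""
    if weights is None:
        # Strict utilitarian valuation: everyone counts equally.
        weights = [1.0] * len(welfares)
    return sum(w * u for w, u in zip(weights, welfares))

welfares = [10, 10, 10]  # e.g. me, my child, a stranger

equal = total_value(welfares)                      # 30.0: all lives weighted equally
partial = total_value(welfares, [2.0, 2.0, 1.0])   # 50.0: self and child weighted more
```

Dropping “equally” just means the weights are no longer all 1.0; nothing contradictory results.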
Utilitarianism is clearly not a good descriptive ethical theory (it does a poor job of describing or predicting how people actually behave) and I see no good reason to believe it is a good normative theory (a prescription for how people should behave).
How are you going to evaluate a normative theory, except by comparison to another normative theory, or by gut feeling?
‘Gut feeling’ is pretty much how I am evaluating it (and it is a normative theory in a sense: what is good is what your intuition tells you is good). Utilitarianism says I should value all humans equally. That conflicts with my intuitive moral values. Given the conflict, and my understanding of where my values come from, I don’t see why I should accept what utilitarianism says is good over what I believe is good.
I think an ethical theory that seems to require all agents to reach the same conclusion on what the optimal outcome would be is doomed to failure. Ethics has to address the problem of what to do when two agents have conflicting desires rather than trying to wish away the conflict.
I think an ethical theory that seems to require all agents to reach the same conclusion on what the optimal outcome would be is doomed to failure.
What do you mean by an “ethical theory” here? Do you mean something purely descriptive, that tries to account for the side of human behaviour that concerns ethics? Or something normative, that sets out what a person should do?
Since it’s clear that people express different ideas about ethics from each other, a descriptive theory that said otherwise would be false as a matter of fact. However, normative theories are generally applicable to everyone for no other reason than that they don’t name the specific individuals they are about.
Utilitarianism is a normative proposal, not a descriptive theory.
I mean a normative theory (or proposal if you prefer). Utilitarianism clearly fails as a descriptive theory (and I don’t think its proponents would generally disagree on that).
A normative theory that proposes that everything would be fine if we could all just agree on the optimal outcome isn’t going to be much help in resolving the actual ethical problems facing humanity. It may be true that if we were all perfect altruists the system would be self-consistent, but we aren’t; I don’t see any realistic way of getting there from here, and I wouldn’t want to anyway (since it would conflict with my actual values).
A useful normative ethics has to work in a world where agents have differing (and sometimes conflicting) ideas of what the optimal outcome is. It has to help us cooperate to our mutual advantage despite imperfectly aligned goals, rather than try to fix the problem by forcing the goals into alignment.
Utilitarianism is a theory for what you should do. It presupposes nothing about what anyone else’s ethical driver is. If cooperating with someone with different ethical goals furthers total utility from your perspective, utilitarianism commends it.
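The point above can be put numerically. In this toy sketch (my own numbers and function name, not anything from the thread), a utilitarian agent compares options purely by total utility across everyone affected, so it cooperates with a differently-motivated agent whenever cooperation scores higher overall, even at some cost to itself:

```python
def choose(options):
    """Pick the option whose summed utilities (across all affected agents) are highest."""
    return max(options, key=lambda name: sum(options[name]))

outcomes = {
    # Each list is (my utility, the other agent's utility).
    "cooperate": [4, 6],  # we trade despite disagreeing about ends
    "go_alone":  [5, 0],  # I do slightly better alone, but total utility is lower
}

best = choose(outcomes)  # "cooperate": total 10 beats total 5
```

Nothing here requires the other agent to share the utilitarian's goals; only the chooser's own decision rule is utilitarian.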
But we are not really utilitarians. Our mental architecture doesn’t allow most of us to really treat every stranger on earth as though they are as valuable as ourselves or our own children.
Shouldn’t this be evidence that utilitarianism isn’t close to the facts about ethics?
Only if you think we’re wired to be ethical.
I believe that was part of what knb was saying.
Shouldn’t this be evidence that utilitarianism isn’t close to the facts about ethics?
The rest of our brains are wired to give close-enough approximations quickly, not to reliably produce correct answers (cf. cognitive biases). It’s not a given that any coherent definition of ethics, even a correct one, should agree with our intuitive responses in all cases.
Short answer? We don’t. Not really. Human beings have an evolved moral instinct.
A longer answer looks at what ‘choice’ means a little more closely, and wonders how traceable causality implies a lack of choice in this instance while ‘choice’ still manages to have any meaning whatsoever.