‘Gut feeling’ is pretty much how I am evaluating it (and it is a normative theory in a sense: what is good is what your intuition tells you is good). Utilitarianism says I should value all humans equally. That conflicts with my intuitive moral values. Given the conflict, and my understanding of where my values come from, I don’t see why I should accept what utilitarianism says is good over what I believe is good.
I think an ethical theory that seems to require all agents to reach the same conclusion on what the optimal outcome would be is doomed to failure. Ethics has to address the problem of what to do when two agents have conflicting desires rather than trying to wish away the conflict.
I think an ethical theory that seems to require all agents to reach the same conclusion on what the optimal outcome would be is doomed to failure.
What do you mean by an “ethical theory” here? Do you mean something purely descriptive, that tries to account for that side of human behaviour that is to do with ethics? Or something normative, that sets out what a person should do?
Since it’s clear that people express differing ideas about ethics, a descriptive theory that said otherwise would be false as a matter of fact. Normative theories, however, generally apply to everyone for no other reason than that they don’t name the specific individuals they are about.
Utilitarianism is a normative proposal, not a descriptive theory.
I mean a normative theory (or proposal, if you prefer). Utilitarianism clearly fails as a descriptive theory (and I don’t think its proponents would generally disagree on that).
A normative theory that proposes that everything would be fine if only we could all agree on the optimal outcome isn’t going to be much help in resolving the actual ethical problems facing humanity. It may be true that if we were all perfect altruists the system would be self-consistent, but we aren’t; I don’t see any realistic way of getting there from here, and I wouldn’t want to anyway (since it would conflict with my actual values).
A useful normative ethics has to work in a world where agents have differing (and sometimes conflicting) ideas of what counts as an optimal outcome. It has to help us cooperate to our mutual advantage despite imperfectly aligned goals, rather than trying to fix the problem by forcing the goals into alignment.
Utilitarianism is a theory of what you should do. It presupposes nothing about what drives anyone else’s ethics. If cooperating with someone who has different ethical goals furthers total utility from your perspective, utilitarianism commends it.
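To make the last two comments concrete, here is a minimal toy sketch in Python (the agents, actions, and payoff numbers are all invented for illustration; nothing here is drawn from the discussion itself). It shows a utilitarian agent scoring each joint action by the total utility across everyone, and so endorsing cooperation with an agent whose own goals differ, simply because cooperation yields the larger total.

```python
# Toy payoffs as (utility to me, utility to the other agent).
# All numbers are invented purely for illustration.
payoffs = {
    ("cooperate", "cooperate"): (3, 4),
    ("cooperate", "defect"):    (1, 2),
    ("defect",    "cooperate"): (2, 1),
    ("defect",    "defect"):    (0, 0),
}

def utilitarian_choice(payoffs, other_action):
    """Pick my action to maximise the sum of everyone's utility,
    given a prediction of the other agent's action."""
    return max(
        ("cooperate", "defect"),
        key=lambda mine: sum(payoffs[(mine, other_action)]),
    )

# Even if the other agent ranks outcomes by its own utility alone,
# the utilitarian cooperates here, since 3 + 4 beats 2 + 1.
print(utilitarian_choice(payoffs, "cooperate"))  # -> 'cooperate'
```

Nothing in the sketch requires the other agent to share the utilitarian criterion; the cooperation falls out of the payoffs alone, which is the sense in which the theory presupposes nothing about anyone else’s ethics.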