This conversation sounds like TAG uses utilitarianism to mean classic utilitarianism, where pains and pleasures are the only consequences that we care about (and rights violations are not), and like you are using it to refer to decision-theoretic utilitarianism, where the consequences can include rights violations as well.
I deny that there is any real difference between the two. Classic utilitarianism is “count up all the good, subtract all the bad.” That’s exactly how I would describe decision-theoretic utilitarianism. Now, the original proponents of utilitarianism also didn’t give much negative weight to rights violations, but that’s a complaint against their utility function, not against utilitarianism per se.
But regardless, I think you did identify and elucidate better than I did the core disagreement here. I hope it’s resolved for TAG.
It sounds like “decision-theoretic utilitarianism” was something invented here.
I think hybrid approaches to ethics have more to offer than purist approaches, and also that it assists communication to label them as such.
Edit:
Actually, it’s worse than that. As Smiffnoy correctly states, maximising your personal utility without regard to anybody else isn’t an ethical theory at all, so it continues the confusion to label it as such.
That only describes a solipsist or sociopath’s utility function. All things being equal, I would like for you to be happy, strange person on the Internet who is reading this. Maximizing my own utility function means preferring outcomes where everyone is happy, because I value those outcomes.
Also, Smiffnoy seems to ignore or be ignorant of game theory and Nash equilibria, which show that under the right conditions purely selfish people acting rationally ought to cooperate, creating outcomes that are the best achievable for everyone. (Far from being an ivory-tower theory, this describes modern capitalist society in a nutshell.)
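A minimal sketch of that claim, with all payoff numbers invented for illustration: in a stag-hunt-style game, mutual cooperation is a Nash equilibrium for purely selfish players, though mutual defection is one too, which is why the “right conditions” qualifier matters.

```python
# Hypothetical 2x2 stag-hunt payoff matrix: keys are (my move, your move),
# values are (my payoff, your payoff). All numbers are invented.
PAYOFFS = {
    ("C", "C"): (4, 4),  # both cooperate: best joint outcome
    ("C", "D"): (0, 3),
    ("D", "C"): (3, 0),
    ("D", "D"): (2, 2),  # both defect: safe but worse for everyone
}

def is_nash(my_move, your_move):
    """A profile is a Nash equilibrium if neither selfish player
    gains by unilaterally switching their own move."""
    mine, yours = PAYOFFS[(my_move, your_move)]
    for alt in ("C", "D"):
        if PAYOFFS[(alt, your_move)][0] > mine:
            return False
        if PAYOFFS[(my_move, alt)][1] > yours:
            return False
    return True

print(is_nash("C", "C"))  # True: selfish players can rationally stay at CC
print(is_nash("D", "D"))  # True: but defection is also an equilibrium
print(is_nash("C", "D"))  # False
```

In a prisoner’s dilemma matrix, by contrast, CC is not an equilibrium at all, which is the point about conditions.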
There’s your problem. We don’t say that two things are the same if they happen to coincide under exceptional circumstances; we say they are the same if they coincide under every possible circumstance.
Ethical utilitarianism and utility-based decision theory don’t coincide when someone is only a little more altruistic than a sociopath. Utilitarianism is notorious for being very demanding, so having a personal UF that coincides with the aggregate used by utilitarianism requires Gandhi-level altruism, and is therefore improbable.
Likewise, decision theory can imply a mutual-cooperation (CC) equilibrium, but it does not do so in every case.
Consequences for whom? If I violate your rights, that’s not a consequence for me. That’s one of the ways in which ethical utilitarianism separates from personal decision theory.
I don’t understand the question. “For whom” doesn’t matter. If I take an action, the world that results as a consequence contains an entity who feels their rights are violated. When I sum over the utility of that world, that rights violation is a negative term, if I’m the kind of person who cares about people’s rights (which I am, but that is a *separate* issue).
For “the ends don’t justify the means” to mean something, it implies that there is something of intrinsic negative morality in the actions I take, even if the results are identical. I argue that this is nonsense: if there were any real, non-deontological difference you could point to, then that would be part of the utility calculation.
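The “sum over the utility of that world” picture can be sketched as follows. The rights-violation weight and all other numbers are invented for illustration; the weight belongs to the particular utility function, not to utilitarianism itself.

```python
# Sketch of "count up all the good, subtract all the bad" over the world
# an action produces. The rights_weight parameter and all numbers below
# are invented; the weight is a property of the chosen utility function.

def world_utility(world, rights_weight=10.0):
    pleasure = sum(world["pleasures"])
    pain = sum(world["pains"])
    # A rights violation enters as just another (negative) consequence.
    return pleasure - pain - rights_weight * world["rights_violations"]

# Two hypothetical worlds with identical hedonic results, differing only
# in whether a right was violated along the way:
world_a = {"pleasures": [5, 5], "pains": [2], "rights_violations": 0}
world_b = {"pleasures": [5, 5], "pains": [2], "rights_violations": 1}

print(world_utility(world_a))  # 8.0
print(world_utility(world_b))  # -2.0
```

Setting `rights_weight` to zero recovers the classic pains-and-pleasures-only calculation; nothing else in the framework changes.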
It matters because your ethical/decision theory will give different results depending on whose utilities you are taking into account.
If I take an action, the world that results as a consequence contains an entity who feels their rights are violated. When I sum over the utility of that world, that rights violation is a negative term, if I’m the kind of person who cares about people’s rights (which I am, but that is a separate issue).
It’s the heart of the issue. If you don’t care about their rights, but they do, then you will violate their rights.
If there is some objective notion of the negative utility that comes from a rights violation, you will violate their rights unless your personal UF happens to be exactly aligned with the objective value.
For “the ends don’t justify the means” to mean something, it implies that there is something of intrinsic negative morality in the actions I take, even if the results are identical.
You can’t calculate what the ultimate results are. You have to use heuristics. That’s why there is a real paradox in the trolley problem: the (necessarily local) calculation says that killing the fat man saves lives, while the heuristic says “don’t kill people.”
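The tension can be made concrete with a toy sketch; both decision rules and all numbers are invented for illustration.

```python
# Sketch of the trolley tension: a local calculation counts only lives,
# while a rule heuristic vetoes killing outright.

def local_calculation(lives_saved, lives_taken):
    # Net lives from the act, as a purely local sum.
    return lives_saved - lives_taken

def rule_heuristic(lives_taken):
    # "Don't kill people": the act is permissible only if no one is killed.
    return lives_taken == 0

# Pushing the fat man: saves five, kills one.
print(local_calculation(5, 1))  # 4: the local sum favours pushing
print(rule_heuristic(1))        # False: the heuristic forbids it
```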
Utilitarianism uses a version of global utility that is based on summing individual utilities.
If you could show that some notion of rights emerges from summation of individual utility, that would be a remarkable result, effectively resolving the trolley problem.
OTOH, there is a loose sense in which rules have some kind of distributed utility, but if that is not based on summation of individual utilities, you are talking about something that isn’t utilitarianism as usually defined.
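One way to see why the personal and aggregate functions rarely coincide, connecting to the earlier point about utilitarianism’s demandingness: the utilitarian aggregate weights everyone equally, while a personal UF typically does not. All utilities and weights below are invented.

```python
# Sketch: the utilitarian aggregate is an unweighted sum of individual
# utilities, while a personal UF typically weights people unequally.

def utilitarian(utilities):
    return sum(utilities.values())

def personal(utilities, weights):
    return sum(weights[p] * u for p, u in utilities.items())

outcome_1 = {"me": 1, "you": 10}
outcome_2 = {"me": 5, "you": 2}
mildly_altruistic = {"me": 1.0, "you": 0.1}

# Utilitarianism ranks outcome_1 first (11 > 7); the mildly altruistic
# agent ranks outcome_2 first (5.2 > 2.0). The two rankings coincide
# only when the personal weights are (near) equal across everyone.
print(utilitarian(outcome_1), utilitarian(outcome_2))
print(personal(outcome_1, mildly_altruistic),
      personal(outcome_2, mildly_altruistic))
```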
That would imply that means are always morally neutral, which is not the case.
It’s a direct consequence of utilitarian morality.
What moral impact is there for means, other than their total consequences?
Maybe utilitarianism is wrong. If means involve rights violations, maybe they are not justified by their consequences.
You’re not applying it correctly. Rights violations are among the consequences. They’re summed as part of the utilitarian equation.
If I feel that I have a right to a swimming pool, does your failure to buy me a swimming pool mean that a right has been violated?