Under utilitarianism, human farming for research purposes and organ harvesting would be justified if it benefited enough future persons.
Under utilitarianism the ideal life is one spent barely subsisting while giving away all material wealth to effective altruism/charity (the reason being that unless you are barely subsisting, there is someone who would benefit from your wealth more than you do).
Also, there is no way to compare interpersonal utility. There is a sense in which I might prefer A to B, but there is no sense in which I can prefer A more than you prefer B. We could vote or bid money, but neither of these results in a satisfactory ethical theory.
Perhaps not with utility theory’s usual definition of “prefer”, but in practice there is a commonsense way in which I can prefer A more than you prefer B, since we’re both humans with almost identical brain architecture.
Interesting, so your utilitarianism depends on agents having similar minds; it doesn't try to be a universal ethical theory for sapient beings.
What exactly is that way in which you can prefer something more than I can? It is not common sense to me, unless you are talking about hedonic utilitarianism. Are you using intensity of desire or intensity of satisfaction as a criterion? Neither one seems satisfactory. People's preferences do not always (or even mostly) align with either. I suppose what I'm asking is for you to provide a systematic way of comparing interpersonal utility.
If I say “I prefer not to be tortured more than you prefer a popsicle”, any sane human would agree. This is the commonsense way in which utility can be compared between humans. Of course, it isn’t perfect, but we could easily imagine ways to make it better, say by running some regression algorithms on brain-scans of humans desiring popsicles and humans desiring not-to-be-tortured, and extrapolating to other human minds. (That would still be imperfect, but we can make it arbitrarily good.)
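To be concrete about the kind of thing I mean, here is a toy sketch of that regression idea. It is purely illustrative: the features, data, and model are invented stand-ins, not a real neuroscience pipeline.

```python
# Toy sketch of the "regression on brain scans" idea: fit a model mapping
# (hypothetical) scan-derived features to self-reported desire intensity,
# then use it to put two different people's desires on a common scale.
# All data here is randomly generated for illustration.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical training data: one row of scan-derived features per person,
# labelled with that person's self-reported desire intensity (0-100).
scan_features = rng.normal(size=(200, 16))      # e.g. activation levels in 16 regions
reported_intensity = rng.uniform(0, 100, 200)   # stand-in for survey responses

model = Ridge(alpha=1.0).fit(scan_features, reported_intensity)

# Extrapolate to two new people: my scan while facing torture,
# your scan while wanting a popsicle.
my_torture_scan = rng.normal(size=(1, 16))
your_popsicle_scan = rng.normal(size=(1, 16))

my_score = model.predict(my_torture_scan)[0]
your_score = model.predict(your_popsicle_scan)[0]
print(f"Estimated intensities: {my_score:.1f} vs {your_score:.1f}")
```

Obviously the real difficulty is whether "self-reported intensity" tracks anything morally relevant, but the point is only that cross-person comparison is an empirical modelling problem, not an impossibility.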
This isn’t just necessary if you’re a utilitarian, it’s necessary if your moral system in any way involves tradeoffs between humans’ preferences, i.e. it’s necessary for pretty much every human who’s ever lived.
So you are a hedonic utilitarian? You think that morality can be reduced to intensity of desire? I already pointed out that human preferences do not reduce to intensity of desire.
I’m not any sort of utilitarian, and that has nothing to do with my point, which is that there obviously is a sense in which I can prefer A more than you prefer B.
That's more like it being conditional on our cooperating. If my enemy said that, I could find it offensive, and it wouldn't compel me to change my actions. If you try to use utilitarian theory to (en)force cooperation, the argument doesn't go through.