it just takes the understanding that five lives are, all things being equal, more important than four lives.
if people agree to judge actions by how well they turn out

general human preference similarity can probably make them agree on the best action even without complete agreement on a rigorous definition of “well”
Your examples rely too heavily on “intuitively right” and ceteris paribus conditioning. It is not always the case that five lives are more important than four, and the mere idea has been debunked several times.
What is the method you use to determine how things will turn out?
Does consensus make decisions correct?
You know the Nirvana fallacy and the fallacy of needing infinite certainty before accepting something as probably true? How the solution is to accept that a claim with 75% probability is pretty likely to be true, and that if you need to make a choice, you should choose based on the 75% claim rather than the alternative? You know how if you refuse to accept the 75% claim because you’re virtuously “waiting for more evidence”, you’ll very likely end up just accepting a claim with even less evidence that you’re personally biased towards?
Morality works the same way. Even if you can’t prove that one situation will always have higher utility than another, you’ve still got to go on the balance of probabilities, because that’s all you’ve got.
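A minimal sketch of that “balance of probabilities” point in code (the 75% figure is from the paragraph above; the payoff numbers are invented for illustration):

```python
# Expected-value sketch: acting on a 75%-probable claim beats acting on its
# 25%-probable alternative. Payoff numbers are illustrative only.

def expected_utility(p_true: float, utility_if_true: float, utility_if_false: float) -> float:
    """Expected utility of acting as if the claim were true."""
    return p_true * utility_if_true + (1 - p_true) * utility_if_false

act_on_claim = expected_utility(0.75, utility_if_true=10, utility_if_false=-2)
act_on_alternative = expected_utility(0.25, utility_if_true=10, utility_if_false=-2)

print(f"Act on the 75% claim:       {act_on_claim:.2f}")        # 7.00
print(f"Act on the 25% alternative: {act_on_alternative:.2f}")  # 1.00
```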
The last time I used consequentialism in a moral discussion was (thinks back) on health care. I was arguing that when you have limited health care resources, it’s sometimes okay to deny care to a “hopeless” case if it can be proven that the resources that would be spent on that care could be used to save more people later. So you may refuse to treat one person with a “hopeless” disease that costs $500,000 to treat in order to be able to treat ten people with diseases that cost $50,000 each.
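To make the arithmetic explicit (the dollar figures are from the paragraph above; the fixed budget and the one-person-per-treatment assumption are added for illustration):

```python
# Allocation sketch: one $500,000 "hopeless" treatment vs. ten $50,000
# treatments under the same fixed budget. The budget size and the assumption
# that each cheaper treatment helps exactly one person are illustrative.

BUDGET = 500_000
HOPELESS_CASE_COST = 500_000  # treats one person, very unlikely to help
CURABLE_CASE_COST = 50_000    # treats one person with a curable disease

hopeless_cases_treated = BUDGET // HOPELESS_CASE_COST  # 1
curable_cases_treated = BUDGET // CURABLE_CASE_COST    # 10

print(f"Spend on the hopeless case: {hopeless_cases_treated} person treated")
print(f"Spend on curable cases:     {curable_cases_treated} people treated")
```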
Now, yes, one of the people involved could be a utility monster. One of the people involved could grow up to be Hitler, or Gandhi, or Ray Kurzweil. Everyone in the example might really be a brain in a vat, or a p-zombie, or Omega, or an Ebborian with constantly splitting quantum mind-sheets. But if you were an actual health care administrator in an actual hospital, would you take the decision that probably fails to save one person, or the decision that probably saves ten people? Or would you say “I have no evidence to make the decision either way”, wash your hands of it, and flip a coin?
In this case, it doesn’t matter how you define utility; for any person who prefers life to death, there’s only one way to proceed. Yet there are many people in the real world, both hospital administrators and especially voters, who would support the other decision—the one where we give one person useless care now but let ten potentially curable people die later—with all their hearts. Our first job is to spread enough consequentialism to get people to stop doing this sort of thing. After that, we can argue about the technical details all we want. We can stop shooting ourselves in the foot even before we have a complete theory of ballistics.
There should be a top-level post to this effect. It belongs as part of the standard introduction to rationality.
Here is a related post: http://lesswrong.com/lw/65/money_the_unit_of_caring/ I’m sure there are others.
I can see how it’s related, but that’s not what I was thinking of. The main points that drew me out were “spread consequentialism” and “first, stop shooting ourselves in the foot.”
I don’t know. It’s gone.
If there is literally nothing distinguishing the two scenarios except for the number of people—you have no information regarding who those people are, how their life or death will affect others in the future (including the population issues you cite), their quality of life or anything else—then it matters not whether it’s 5 vs. 4 or a million vs. 4. Adding a million people at quality of life C or preventing their deaths is better than the same with four, and any consequentialist system of morality that suggests otherwise contains either a contradiction or an arbitrary inflection point in the value of a human life.
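One way to see the claim concretely: if each equally-situated life carries the same positive value, total value increases strictly with the number of lives, so any system that prefers 4 to a million has to bend that value function somewhere. A minimal sketch (the constant per-life value is an assumption for illustration):

```python
# With a constant positive value per life (all else equal), total value grows
# monotonically with the number of lives, so a million beats five beats four.
# Preferring four would require the marginal value of a life to turn negative
# somewhere -- the "arbitrary inflection point" mentioned above.

VALUE_PER_LIFE = 1.0  # any positive constant; the exact number does not matter

def total_value(lives: int) -> float:
    return lives * VALUE_PER_LIFE

assert total_value(5) > total_value(4)
assert total_value(1_000_000) > total_value(4)
```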
The utility monster citation is fascinating because of a) how widely it diverges from all available evidence about human psychology, both with respect to diminishing returns and to the similarity of human valences, b) how much the thought experiment is improved by substituting “human” (a thing whose utility I care about) for “monster” (for which I do not), and c) how straightforward it really seems: if there really were something 100 times more valuable than my life, then as a consequentialist I certainly ought to sacrifice my life for it.
I’ll ignore the assumption made by the second article that human population growth is truly exponential rather than logistic. It further assumes—contrary to the utility monster, I note—that we ought to be using average utilitarianism. Even then, if all things were equal, which the article stipulates they are not, more humans would still be better. The article is simply arguing that that state of affairs does not hold, which may be true. Consequentialism is, after all, about the real world, not only about ceteris paribus situations.
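For reference, the distinction between the two growth assumptions mentioned here, sketched with invented parameters (the growth rate and carrying capacity below are not from the article):

```python
# Exponential vs. logistic growth. r (growth rate) and K (carrying capacity)
# are illustrative values, not estimates.
import math

def exponential(p0: float, r: float, t: float) -> float:
    """Unbounded growth: P(t) = P0 * e^(r*t)."""
    return p0 * math.exp(r * t)

def logistic(p0: float, r: float, k: float, t: float) -> float:
    """Growth that levels off at the carrying capacity K."""
    return k / (1 + ((k - p0) / p0) * math.exp(-r * t))

p0, r, k = 1.0, 0.02, 10.0  # initial population (billions), annual rate, capacity
for t in (0, 50, 100, 200):
    print(f"t={t:3d}  exponential={exponential(p0, r, t):6.2f}  "
          f"logistic={logistic(p0, r, k, t):6.2f}")
```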
(Or a constant value for human life, but with positive utility assigned to the probability of extinction from independent chance deaths, following an even more arbitrary and somewhat bizarre function.)
Bayes’ rule.
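A minimal sketch of what applying Bayes’ rule to “how things will turn out” can look like (the prior and likelihoods are invented for illustration):

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
# Toy update: how likely a treatment is to succeed after an encouraging test
# result. All numbers are made up for illustration.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence

prior_success = 0.30        # belief before seeing the test result
p_result_if_success = 0.80  # chance of this result if the treatment works
p_result_if_failure = 0.20  # chance of this result if it does not

p = posterior(prior_success, p_result_if_success, p_result_if_failure)
print(f"P(success | result) = {p:.2f}")
# 0.8 * 0.3 / (0.8 * 0.3 + 0.2 * 0.7) = 0.63
```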
Of course not; don’t make straw men. Consensus is simply the best indicator of rightness we know of so far.