What are the experiences that we are trying to explain with our ethical theories? Why bother with ethics at all? What is the mystery we are trying to solve? The only answer I can think of is our ethical intuitions.
We are trying to find out what kinds of things are to be done. Ethical intuitions are some of our observations about what should be done, but the theory needs to describe what should be done, not (just) explain the intuitions. This is a point where, e.g., knowledge about scope insensitivity trumps the raw response of ethical intuition (you acknowledge the problem in the post, but without shifting the focus from the observation to the thing observed).
So our ethical intuitions are scope insensitive. But how do we know how to correct for the insensitivity? Maybe the value of an action increases linearly as its scope increases. Maybe it increases exponentially. Maybe the right thing to do is to average the utility per person. What possible evidence is there to answer this question? For that matter, why should we think that ethics shouldn’t be scope insensitive?
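To make the candidates in that question explicit (the notation below is an added sketch, not anything from the comment itself): writing $n$ for the number of people affected, $u$ for the per-person benefit, and $N$ for the total number of people who exist, the options on the table look roughly like

$$
V(n) = u\,n \quad\text{(linear in scope)}, \qquad
V(n) = u\,c^{\,n} \quad\text{(exponential in scope)}, \qquad
V = \frac{1}{N}\sum_{i=1}^{N} u_i \quad\text{(average utility per person)},
$$

and the open question is what observation could favor one of these functional forms over the others.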
See Circular Altruism.
I just reread it but I don’t see how it answers the question. Can I impose on you to spell it out for me?
The idea is that when humans are confronted with ethical choices, they often have qualitative feelings (murder is bad, saving lives is good) upon which they try to make their ethical decision. This seems more natural, whereas applying mathematics to ethics seems somewhat repugnant. From Eliezer’s post:

My favorite anecdote along these lines—though my books are packed at the moment, so no citation for now—comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective. The governmental agency rejected the report because, they said, you couldn’t put a dollar value on human life. After rejecting the report, the agency decided not to implement the measure.
My understanding of the gist of the post is that Eliezer argues that if he can demonstrate that the “refusal to multiply” leads to inconsistent ethical choices, then you must admit you should get over your emotional reluctance and apply quantitative reasoning, so that you actually make the correct ethical choice.
The scope insensitivity comes from the fact that we consider it extremely ethical to save a child, but not really twice as ethical to save two children. You can see this clearly in action movies, where you’re supposed to be so thrilled that one particular child was saved when you have just watched a whole train, presumably carrying many families as well as those nearly ethically worthless single men, fall into an abyss.
I’ll add my thoughts regarding the question, “why should we think that ethics shouldn’t be scope insensitive?” separately:
“Circular Altruism” suggests that we shouldn’t expect ethical intuition to be scope sensitive. But should we expect that ethics is scope sensitive?
I think that ‘multiplication’ might be good—and pragmatically required—for deciding what we should do. However, this doesn’t guarantee that it is “ethical” to do so (this distinction is only coherent for some subset of definitions of ethical, obviously). Ethics might require that you “never compromise”, in which case ethics would just provide an ideal, unobtainable arrow for how the world should be, and then it is up to us to decide how to pragmatically get the world there, or closer to there.
Yes, this is one of the big questions. In Eliezer’s scope insensitivity post, he implied that goodness is proportional to the absolute number. But Eliezer has also said that he is an average utilitarian. So actually he should see what he called “scope insensitivity” as pretty near the right thing to do. If someone has no idea how many migrating birds there are, and you ask them, “How much would you pay to save 2,000 birds?”, they’re going to seize on 2,000 as a cue to how many migrating birds there are. It might seem perfectly reasonable to such a person to suppose there are about 10,000 migrating birds total. But if you asked them, “How much would you pay to save 200,000 migrating birds?”, they might suppose there are about 1,000,000 migrating birds. If they value the result by the average good done per existing bird, it would be correct for them to be willing to pay the same amount in both cases.
So what you’re asking amounts to asking questions like whether you want to maximize average utility, total utility, or maximum utility.
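To make the total-versus-average contrast concrete, here is a minimal Python sketch (the function, the dollar scale, and the anchored population estimates are illustrative assumptions, not anything from the original posts):

```python
def willingness_to_pay(birds_saved, assumed_total_birds, rule, dollars_per_bird=0.01):
    """Toy valuation of a bird-saving intervention under two aggregation rules."""
    if rule == "total":
        # Value scales with the absolute number of birds saved.
        return birds_saved * dollars_per_bird
    elif rule == "average":
        # Value scales with the fraction of the (assumed) population saved,
        # i.e. with the improvement in average welfare per existing bird.
        return (birds_saved / assumed_total_birds) * 1000  # arbitrary dollar scale
    raise ValueError(rule)

# The respondent anchors their estimate of the total population on the number
# that appears in the question, as described in the comment above.
print(willingness_to_pay(2_000, 10_000, "average"))       # 200.0
print(willingness_to_pay(200_000, 1_000_000, "average"))  # 200.0  (same answer)
print(willingness_to_pay(2_000, 10_000, "total"))         # 20.0
print(willingness_to_pay(200_000, 1_000_000, "total"))    # 2000.0 (100x larger)
```

Under the average rule the two answers coincide, so the survey behavior usually labeled “scope insensitivity” would look roughly correct to an average utilitarian; under the total rule the second answer should be a hundred times larger.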
Average utilitarianism over the entirety of Reality looks mostly like aggregative utilitarianism locally.
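A quick way to see this (a sketch, assuming that local actions do not change how many beings exist): with $N$ the fixed number of beings in all of Reality and $U_i$ their utilities,

$$
\bar{U} \;=\; \frac{1}{N}\sum_{i=1}^{N} U_i,
\qquad
\Delta\bar{U} \;=\; \frac{\Delta U_{\text{total}}}{N},
$$

so for fixed $N$, ranking local actions by their effect on the Reality-wide average is the same as ranking them by their effect on the total; the two criteria only come apart when an action changes how many beings there are.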
I don’t think that most people average over all reality. We treated the last 3,000 wolves in North America with much more consideration than we treated the initial 30,000,000 wolves in North America. It’s as if we have a budget per-species; as if we partition reality before applying our utility function.
(People like to partition their utility function, because they don’t like to confront questions like, “How many beavers are worth a human’s life?”, or, “How many lattes would you give up to feed a person in Haiti for a week?”)
Treating rare animals as especially valuable could be a result of having a (partitioned) utility function that discounts large numbers of individuals. Or it could have to do with valuing the information that will be lost if they go extinct. I don’t know how to disentangle those things.
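For concreteness, here is a minimal sketch of what such a partitioned, diminishing-returns valuation could look like (the logarithmic curve and the population numbers are illustrative assumptions, not a claim about anyone’s actual values):

```python
import math

def species_value(population):
    # Concave (logarithmic) value per species: each additional individual
    # adds less value as the population grows.
    return math.log1p(population)

def partitioned_value(populations):
    # Value the world by summing a separate curve for each species,
    # rather than pooling every animal into one big count.
    return sum(species_value(n) for n in populations.values())

world_then = {"wolves": 30_000_000, "beavers": 10_000_000}
world_now = {"wolves": 3_000, "beavers": 10_000_000}

# Marginal value of one extra wolf in each world:
print(partitioned_value({**world_then, "wolves": 30_000_001}) - partitioned_value(world_then))
print(partitioned_value({**world_now, "wolves": 3_001}) - partitioned_value(world_now))
```

With a curve like this the marginal wolf is worth roughly ten thousand times more when only 3,000 remain, which reproduces the per-species-budget behavior without positing a literal budget; it does not, however, separate that explanation from the information-loss one.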
This is an interesting question.