The idea is that when humans are confronted with ethical choices, they often have qualitative feelings (murder is bad, saving lives is good) on which they try to base their ethical decisions. This feels more natural, whereas applying mathematics to ethics seems somewhat repugnant. From Eliezer’s post:
My favorite anecdote along these lines—though my books are packed at the moment, so no citation for now—comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective. The governmental agency rejected the report because, they said, you couldn’t put a dollar value on human life. After rejecting the report, the agency decided not to implement the measure.
My understanding of the gist of the post is this: if Eliezer can demonstrate that the “refusal to multiply” leads to inconsistent ethical choices, then you must admit you should get over your emotional reluctance and apply quantitative reasoning, so that you actually end up making the correct ethical choice.
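To make the “multiplication” concrete (this is my gloss on the standard expected-value reading, not a quotation from the post): the quantitative move is to weight an uncertain rescue by its probability of success,

$$\mathbb{E}[\text{lives saved}] = p \times n, \qquad \text{e.g. } 0.1 \times 1000 = 100,$$

and then compare options by that number rather than by how strongly they tug on intuition.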
The scope insensitivity comes from the fact that we consider it extremely ethical to save a child, but not really twice as ethical to save two children. You can see this clearly in action movies, where you’re supposed to be thrilled that one particular child was saved even though you just watched a whole train, presumably carrying many families as well as those nearly ethically worthless single men, fall into an abyss.
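Put formally (my own formalization, not the post’s): a scope-sensitive valuation $V$ of rescues would scale linearly with the number of people saved,

$$V(n \text{ saved}) = n \cdot V(1 \text{ saved}),$$

whereas the intuitive reaction behaves more like $V(2) \approx V(1)$.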
I’ll add my thoughts regarding the question, “why should we think that ethics shouldn’t be scope insensitive?” separately:
“Circular Altruism” suggests that we shouldn’t expect ethical intuition to be scope sensitive. But should we expect ethics itself to be scope sensitive?
I think that ‘multiplication’ might be good, and pragmatically required, for deciding what we should do. However, this doesn’t guarantee that it is “ethical” to do so (a distinction that is only coherent under some definitions of ethical, obviously). Ethics might require that you “never compromise”, in which case ethics would just provide an ideal, unattainable arrow pointing at how the world should be, and it would then be up to us to decide how to pragmatically get the world there, or closer to there.
See Circular Altruism.
I just reread it but I don’t see how it answers the question. Can I impose on you to spell it out for me?