I think I disagree with your approach here.
I, and I think most people in practice, use reflective equilibrium to decide what our ethics are. This means we can notice that our ethical intuitions are insensitive to scope, judge on reflection that this insensitivity is a mistake, and thus adopt an ethics different from the one our naive intuition gives.
When we’re trying to use logic to decide whether to accept an ethical conclusion counter to our intuition, it’s no good to document what our intuition currently says as if that settles the matter.
A priori, 1,000 lives at risk may seem just as urgent as 10,000. But we think about it, and we do our best to override that intuition.
And in fact, I fail pretty hard at it. I’m pretty sure the amount I give to charity wouldn’t be different in a world where the effectiveness of the best causes were an order of magnitude different. I suspect this is true of many people; certainly anyone following the Giving What We Can pledge is using an ancient Schelling point rather than any kind of calculation. But that doesn’t mean you can convince me that my “real” ethics doesn’t care how many lives are saved.
When we talk about weird hypotheticals like Pascalian deals, we aren’t trying to figure out what our intuition says; we’re trying to figure out whether we should overrule it.
When you use philosophical reflection to override naive intuition, you should have explicit reasons for doing so. A reason for valuing 10,000 lives 10 times as much as 1,000 lives is that both numbers are tiny compared to the total number of lives, so valuing them at a different ratio would imply an oddly sharp bend in utility as a function of lives; and we can tell there is no such bend because, if we imagine a few thousand more or fewer people on the planet, our intuitions about that particular tradeoff do not change. This reasoning does not apply to decisions affecting astronomically large numbers of lives, and I have not seen any compelling reasoning that does apply to them.
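To put the sharp-bend argument in symbols (a sketch in my own notation, not a quotation): write U(n) for utility as a function of the number of lives and N for the current population, so saving k lives is worth U(N) − U(N − k). For k much smaller than N, a smooth U gives

\[ U(N) - U(N - k) \approx k\,U'(N), \]

so saving 10,000 lives is worth roughly ten times saving 1,000 unless U′ changes abruptly within a few thousand lives of N, and the fact that our tradeoff intuitions stay put when we imagine a slightly larger or smaller planet is evidence against such an abrupt change. Nothing in this approximation constrains U at populations astronomically larger than N.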
It is also not true that people are trying to figure out whether to overrule their intuition when they talk about Pascal’s mugging; typically, they are trying to figure out how to justify not overruling it. How else can you explain the preponderance of shaky “resolutions” to Pascal’s mugging that accept the Linear Utility Hypothesis and nonetheless conclude that you should not pay Pascal’s mugger, when “I tried to estimate the relevant probabilities fairly conservatively, multiplied probabilities by utilities, and paying Pascal’s mugger came out far ahead” is usually not considered a resolution?
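For concreteness, here is a minimal sketch, with entirely made-up numbers, of the expected-value calculation that last quoted sentence describes: under the Linear Utility Hypothesis, even a deliberately conservative probability multiplied by an astronomically large promised payoff makes paying come out far ahead.

```python
from fractions import Fraction

# A sketch with illustrative, made-up numbers (not anyone's actual estimates).
# Exact rationals avoid floating-point overflow on the astronomically large payoff.
p_mugger_honest = Fraction(1, 10**50)  # a deliberately "conservative" tiny probability
lives_if_honest = 10**1000             # the mugger's promised payoff, in lives
cost_of_paying = 100                   # disutility of paying, in life-equivalents (also made up)

# Linear Utility Hypothesis: utility is proportional to lives saved,
# so expected utility is just probability times the number of lives.
expected_lives_saved = p_mugger_honest * lives_if_honest

print(expected_lives_saved > cost_of_paying)  # True, by a factor of 10**948
```

The specific numbers are arbitrary; the point is that a resolution which keeps the linear utility assumption needs to say why this multiplication does not settle the question.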