Don’t have as much time as I would like, but short and (not particularly) sweet:
I think there is a mix-up between evaluative and normative concerns here. We could say that the repugnant-conclusion world is evaluated as better than the current world, but that some fact about how we get there (via benign addition or similar) is normatively unacceptable. But even then that seems a big bullet to bite: most of us think the RC world is worse than a smaller population with high happiness (even if lower aggregate welfare), not that it is better but that it would be immoral for us to get there.
Another way of parsing your remarks is to say that when the ‘levelling’ option is available to us, benign addition is no longer better than leaving things as they are by person-affecting lights. So B < A, and if we know we can move from A+ --> B, then A+ < A as well. This has the unfortunate side-effect of violating the independence of irrelevant alternatives (if only A and A+ are on offer, we should say A+ > A, but once we introduce B, A > A+). Maybe that isn’t too big a bullet to bite, but (lexically prior) person-affecting restrictions tend to lead to funky problems where we rule out seemingly great deals (e.g. a trillion blissful lives for the cost of a pinprick). That said, everything in population ethics has nasty conclusions...
However, I don’t buy the idea that we can rule out benign addition on the grounds that the addition of a moral obligation harms someone independently of the drop in utility they incur in fulfilling it. It seems plausible that a fulfilled moral obligation makes the world a better place. There are weird consequences if you take this to be lexically prior to other concerns about benign addition: on the face of it, this suggests we should say it is wrong for people in the developing world to have children (as they impose further obligations on affluent westerners), or indeed, depending on the redistribution ethic you take, wrong for everyone who isn’t the most well-off person. Even if you don’t, and say this is outweighed by other concerns, it still seems to misdiagnose what should be morally salient here—there isn’t even a pro tanto reason for poor parents not to have children on the grounds that they’d impose further obligations on richer folks to help.
I think there is a mix-up between evaluative and normative concerns here.
That’s right, my new argument doesn’t avoid the RC for questions like “if two populations were to spontaneously appear at exactly the same time which would be better?”
Another way of parsing your remarks is to say that when the ‘levelling’ option is available to us, benign addition is no longer better than leaving things as they are by person-affecting lights. So B < A, and if we know we can move from A+ --> B, A+ < A as well.
What I’m actually arguing is that A+ is [person-affecting] worse than A, even when B is unavailable. This comes from following the axiom of transitivity backwards instead of forwards: if A > B and A+ < B, then A+ < A. If I were to give a more concrete reason why A+ < A, I would say that the fact that the A people are unaware that the + people exist is irrelevant: they are still harmed. This is not without precedent in ethics; most people think that a person who has an affair harms their spouse, even if the spouse never finds out.
However, after reading this essay by Eliezer (after I wrote my November 8th comment), I am beginning to think the intransitivity that the person-affecting view seems to create in the Benign Addition Paradox is an illusion. “Is B better than A?” and “Is B better than A+?” are not the same question if you adopt a person-affecting view, because the persons being affected are different in each question. If you ask two different questions you shouldn’t expect transitive answers.
There seem to be weird consequences if you take this to be lexically prior to other concerns for benign addition: on the face of it, this suggests we should say it is wrong for people in the developing world to have children (as they impose further obligations on affluent westerners
I know, the example I gave was that we all might be harming unimaginably affluent aliens by reproducing. I think you are right that even taking the objections to it that I gave into account, it’s a pretty weird conclusion.
there isn’t even a pro tanto concern for poor parents not to have children because they’d impose further obligations on richer folks to help.
I don’t know; I’ve heard people complain about poor people reproducing and increasing the burden on the welfare system before. Most of the time I find these complainers repulsive: I think their complaints are motivated by ugly, mean-spirited snobbery and status signalling rather than genuine ethical concern. But I suspect that a tiny minority of the complainers might have been complaining out of genuine concern that they were being harmed.
Maybe that isn’t too big a bullet to bite, but (lexically prior) person affecting restrictions tend to lead to funky problems where we rule out seemingly great deals (e.g. a trillion blissful lives for the cost of a pinprick).
Again, I agree. My main point in making this argument was to try to demonstrate that a pure person-affecting viewpoint could be saved from the benign addition paradox. I think that even if I succeeded in that, the other weird conclusions I drew (i.e., we might be hurting super-rich aliens by reproducing) demonstrate that a pure person-affecting view is not morally tenable. I suspect the best solution might be to develop some pluralist synthesis of person-affecting and objective views.
It seems weird to say A+ < A on a person-affecting view even when B is unavailable, in virtue of the fact that the A people now labour under an (unknown to them, and impossible to fulfil) moral obligation to improve the lives of the additional persons. Why stop there? We would seem to suffer infinite harm by failing to bring into existence people we stipulate have positive lives but necessarily cannot exist. The fact that these (unknown, impossible-to-fulfil) obligations are non-local also leads to alien-y reductios. Further, we generally do not want to say that impossible-to-fulfil obligations really obtain, still less that merely being subject to them harms us; why believe that?
Intransitivity
I didn’t find the Eliezer essay enlightening, but it is orthodox to hold that evaluative questions should have transitive answers (“Is A better than A+? Is B better than A+?”), and most person-affecting views have big problems with transitivity. Consider this example:
World 1: A = 2, B = 1
World 2: B = 2, C = 1
World 3: C = 2, A = 1
By a simple person-affecting view, W2 > W1 (B is better off), W3 > W2 (C is better off), and W1 > W3 (A is better off). So we have an intransitive cycle. (There are attempts to dodge this via comparative-harm views etc., but ignore that.)
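A minimal sketch of this comparison, treating a world as a map from persons to welfare levels and comparing only the people common to both worlds:

```python
# A world is a dict from persons to welfare levels. On a simple
# person-affecting view, X beats Y iff the people who exist in both
# worlds are collectively better off in X.
def pa_prefers(x, y):
    common = x.keys() & y.keys()
    return sum(x[p] for p in common) > sum(y[p] for p in common)

w1 = {"A": 2, "B": 1}
w2 = {"B": 2, "C": 1}
w3 = {"C": 2, "A": 1}

# In each pairwise comparison exactly one person exists in both worlds,
# and the preferences chase each other around the triangle.
cycle = (
    (pa_prefers(w1, w2) and pa_prefers(w2, w3) and pa_prefers(w3, w1))
    or (pa_prefers(w2, w1) and pa_prefers(w3, w2) and pa_prefers(w1, w3))
)
assert cycle  # an intransitive cycle: no world is overall best
```

Whichever direction you read the pairwise verdicts, there is no world that beats both of the others, which is the intransitivity at issue.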
One way person-affecting views can avoid normative intransitivity (which seems really bad) is to give normative principles that set how you may pick among available worlds. So once you are in a given world (say A), you can say that no option is acceptable that leads to anyone in that world ending up worse off. Once one knows there is a path to B via A+, taking the first step to A+ is unacceptable, but it would be okay if no A+-to-B option were available. This violates the independence of irrelevant alternatives and leads to path dependency, but that isn’t such a big bullet to bite (you retain within-choice ordering).
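This option-filtering rule can be sketched as follows; `acceptable`, `worse_off`, and all the welfare figures are hypothetical names and numbers chosen purely for illustration:

```python
# Rule: an option is acceptable only if no world reachable through it
# makes someone in the current world worse off.
def worse_off(current, future):
    """True iff some person in `current` is worse off in `future`."""
    return any(future[p] < u for p, u in current.items() if p in future)

def acceptable(current, option, reachable_from):
    """Reject `option` if any world reachable from it (including the
    option itself) leaves someone in `current` worse off."""
    frontier, seen = [option], []
    while frontier:
        w = frontier.pop()
        if worse_off(current, w):
            return False
        seen.append(w)
        frontier.extend(x for x in reachable_from(w) if x not in seen)
    return True

A      = {"A1": 10}
A_plus = {"A1": 10, "P1": 1}   # benign addition: A1 is no worse off
B      = {"A1": 6,  "P1": 6}   # levelling harms A1

# With a path from A+ to B on the table, the first step is ruled out;
# with no such path, the same step is fine -- hence path dependency.
assert not acceptable(A, A_plus, lambda w: [B] if w is A_plus else [])
assert acceptable(A, A_plus, lambda w: [])
```

The two assertions at the end are exactly the violation of the independence of irrelevant alternatives described above: the very same A-to-A+ step flips from unacceptable to acceptable depending on whether B is reachable.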
Synthesis
I doubt there is going to be any available synthesis between person-affecting and total views that will get out of trouble. One can get the RC so long as the ‘total term’ has some weight (i.e. is not lexically inferior to person-affecting wellbeing), because we can just offer massive increases in impersonal welfare that outweigh the person-affecting harm. Conversely, we keep intransitivity and other costly consequences with a mixed (non-lexically-prior) view—indeed, we can downwardly Dutch-book someone by picking our people with care to get pairwise comparisons, e.g.:
W1: A=10, B=5
W2: B=6, C=2
W3: C=3, A=1
W4: A=2, B=1
Even if you almost entirely value impersonal welfare and put only a tiny weight on person-affecting harm, we can make sure we reduce total welfare only very slightly between each world, so that the loss is made up for by the person-affecting benefit. It seems the worst of both worlds. I find accepting the total view (and the RC) the best way out.
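The downward cycle in that example (W1: A=10, B=5; W2: B=6, C=2; W3: C=3, A=1; W4: A=2, B=1) can be checked mechanically; `pa_gain` is a hypothetical helper summing welfare changes for the people common to two worlds:

```python
# The four worlds from the Dutch-book example.
w1 = {"A": 10, "B": 5}
w2 = {"B": 6,  "C": 2}
w3 = {"C": 3,  "A": 1}
w4 = {"A": 2,  "B": 1}

def pa_gain(old, new):
    """Net welfare change for the persons existing in both worlds."""
    return sum(new[p] - old[p] for p in old.keys() & new.keys())

# Every step is a strict person-affecting improvement...
assert pa_gain(w1, w2) > 0   # B: 5 -> 6
assert pa_gain(w2, w3) > 0   # C: 2 -> 3
assert pa_gain(w3, w4) > 0   # A: 1 -> 2
# ...yet everyone from W1 ends up worse off, and total welfare
# collapses from 15 to 3: a downward Dutch book.
assert all(w4[p] < w1[p] for p in w1)
assert sum(w4.values()) < sum(w1.values())
```

Each transition looks good to the person it affects, but following the whole chain leaves the original people strictly worse off than if they had refused every step.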
most person affecting views have big problems with transitivity
That is because I don’t think the person-affecting view asks the same question each time (that was the point of Eliezer’s essay). The person-affecting view doesn’t ask “Which society is better, in some abstract sense?” It asks “Does transitioning from one society to the other harm the collective self-interest of the people in the original society?” That’s obviously going to result in intransitivity.
I doubt there is going to be any available synthesis between person affecting and total views that will get out of trouble.....Conversely, we can keep intransitivity and other costly consequences with a mixed (non-lexically prior) view—indeed, we can downwardly dutch book someone by picking our people with care to get pairwise comparisons, eg.
I think I might have been conflating the “person-affecting view” with the “prior existence” view. The prior existence view, as I understand it, takes the interests of future people into account, but reserves to present people the right to veto bringing new people into existence when doing so would seriously harm their current interests. So it is immoral for existing people to create someone with low utility and then refuse to help or share with them on the grounds that it would harm their self-interest, but it is moral [at least in most cases] for them to refuse to create someone whose existence would harm their self-interest.
Basically, I find it unacceptable for ethics to conclude something like “It is a net moral good to kill a person destined to live a very worthwhile life and replace them with another person destined to live a slightly more worthwhile life.” This seems obviously immoral to me. It seems obvious that a world where that person is never killed and lives their life is better than one where they were killed and replaced (although one where they were never born and the person with the better life was born instead would obviously be best of all).
On the other hand, as you pointed out before, it seems trivially right to give one existing person a pinprick on the finger in order to create a trillion blissful lives who do not harm existing people in any other way.
I think the best way to reconcile these two intuitions is to develop a pluralist system where prior-existence concerns have much, much, much larger weight than total concerns, but not infinitely large weight. In more concrete terms, it’s wrong to kill someone and replace them with one slightly better off person, but it could be right to kill someone and replace them with a quadrillion people who lead blissful lives.
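The weighted trade-off above can be sketched numerically; the weight `W = 10**6` and all the welfare figures are illustrative assumptions, not a claim about what the right weight actually is:

```python
# A toy pluralist scoring rule: prior-existence harms carry a very
# large but finite weight W relative to impersonal (total) gains.
W = 10**6

def act_value(total_gain, prior_existence_harm):
    """Net value of an act under the mixed view."""
    return total_gain - W * prior_existence_harm

# Kill someone at welfare 10 and replace them with one person at 11:
# a tiny impersonal gain against a heavily weighted harm.
replace_one = act_value(total_gain=1, prior_existence_harm=10)

# Kill the same person but create a quadrillion blissful lives at 5:
# the impersonal gain now swamps even the weighted harm.
replace_many = act_value(total_gain=5 * 10**15 - 10,
                         prior_existence_harm=10)

assert replace_one < 0      # wrong: the weighted harm dominates
assert replace_many > 0     # permitted: enough impersonal good
```

Because W is finite, a large enough impersonal gain always wins eventually, which is exactly why this view blocks one-for-one replacement but not replacement by a quadrillion blissful lives.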
This doesn’t completely avoid the RC, of course. But I think I can accept that. The thing I found particularly repugnant about the RC is the idea that an RC-type world is the best practicable world, i.e., the best possible world that can ever be created given the various constraints its inhabitants face. That’s what I want to avoid, and I think the various pluralist ideas I’ve introduced successfully do so.
You are right to point out that my pluralist ideas do not avoid the RC for a sufficiently huge world. However, as long as an RC world is never the one we should be aiming for, I think I can accept it.