For most systems of population ethics the sadistic conclusion can thus be reduced to “creating underclasses with slightly worthwhile lives can sometimes be bad.”
To flesh it out further though, it says “creating many slightly worthwhile lives can sometimes be worse than creating one slightly worse-than-nothing life”. Which may not really deserve the label “sadistic”, but still strikes me as highly counter-intuitive.
That follows pretty generally: if “category A contains bad stuff, and scales (you can make it better or worse)” and “category B contains bad stuff, and scales”, then it’s not surprising that you can find something in A worse than something in B (unless you use aberrations like lexicographic ordering). It’s like the 3^^^3 dust specks/stubbed toes all over again…
Good point. One way out of the counter-intuitiveness—which hasn’t gone away for me, despite your explanation—is to deny that “it scales.” I.e., deny that the badness of creating a just-barely-worthwhile life, into a world containing many good lives, scales with the number of just-barely-worthwhile lives. Something approaching a maximin view—the idea that in population ethics, there’s a fundamental component of value that depends on how the worst-off person fares—while I wouldn’t agree with it, doesn’t seem so implausible. And I think it would get you many of the conclusions that you’re after.