That is generally a simple consequence of combining “category A contains bad stuff, and it scales (you can make it better or worse)” with “category B contains bad stuff, and it scales”: given both, it’s not surprising that you can find something in A worse than something in B (unless you use aberrations like lexicographical ordering). It’s the 3^^^3 dust specks/stubbed toes all over again…
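A minimal sketch of that scaling point, with invented badness units (the per-instance values, the counts, and the tuple-comparison trick are all illustrative assumptions, not anything from the thread):

```python
# Illustrative sketch: if badness in both categories scales additively with
# the number of instances, some multiple of a "mild" harm from category A
# eventually exceeds a single "severe" harm from category B.

SPECK_BADNESS = 1            # hypothetical badness of one dust speck
STUBBED_TOE_BADNESS = 1000   # hypothetical badness of one stubbed toe

def total_badness(per_instance, count):
    """Aggregate badness under simple additive scaling."""
    return per_instance * count

# Additive view: enough specks outweigh one stubbed toe.
assert total_badness(SPECK_BADNESS, 10_000) > total_badness(STUBBED_TOE_BADNESS, 1)

# Lexicographical ordering blocks this: compare (severe_count, mild_count)
# as tuples, so no number of mild harms ever outweighs one severe harm.
def lexicographic_badness(severe_count, mild_count):
    return (severe_count, mild_count)  # Python tuples compare lexicographically

assert lexicographic_badness(1, 0) > lexicographic_badness(0, 10_000)
```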
Good point. One way out of the counter-intuitiveness (which hasn’t gone away for me, despite your explanation) is to deny that “it scales”: i.e., deny that the badness of adding a just-barely-worthwhile life to a world containing many good lives scales with the number of just-barely-worthwhile lives. Something approaching a maximin view, the idea that in population ethics there is a fundamental component of value that depends on how the worst-off person fares, doesn’t seem so implausible, though I wouldn’t agree with it myself. And I think it would get you many of the conclusions that you’re after.
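For contrast, a toy maximin-flavored value function (the welfare numbers are invented for illustration), under which the loss from adding barely-worthwhile lives depends on the worst-off member rather than scaling with the count:

```python
# Toy sketch, assuming a maximin component of value: the value of a
# population is (partly) the welfare of its worst-off member, so adding
# a million just-barely-worthwhile lives is no worse than adding one.

def maximin_value(welfares):
    """Value of a population = welfare of the worst-off person."""
    return min(welfares)

good_lives = [90, 85, 95]   # hypothetical welfare levels
barely_worthwhile = 1       # a just-barely-worthwhile life

one_added  = maximin_value(good_lives + [barely_worthwhile])
many_added = maximin_value(good_lives + [barely_worthwhile] * 1_000_000)

# The loss relative to the original population does not scale with the
# number of barely-worthwhile lives added: this is what denying
# "it scales" amounts to on such a view.
assert one_added == many_added == 1
```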