It wasn’t a rhetorical question; I really wanted (and still want) to know your answer.
(My answer to your question is yes, fwiw)
Thanks for clarifying. NU certainly sounds like a rather bleak ethic. But NUs want us all to have fabulously rich, wonderful, joyful lives—just not at the price of anyone else’s suffering. NUs would “walk away from Omelas”. Reading JDP’s post, one might be forgiven for thinking that the biggest x-risk was from NUs. However, later this century and beyond, if (1) “omnicide” is technically feasible, and if (2) suffering persists, then there will be intelligent agents who would bring the world to an end to get rid of it. You would end the world too rather than undergo some kinds of suffering. By contrast, genetically engineering a world without suffering, populated only by fanatical life-lovers, will be safer for the future of sentience—even if you think the biggest threat to humanity comes from rogue AGI/paperclip-maximizers.
Thanks for answering. FWIW I’m totally in favor of genetically engineering a world without suffering, in case that wasn’t clear. Suffering is bad.
But NUs want us all to have fabulously rich, wonderful, joyful lives—just not at the price of anyone else’s suffering. NUs would “walk away from Omelas”.
Quantitatively, given a choice between a tiny amount of suffering X + everyone and everything else being great, or everyone dying, would NUs choose omnicide no matter how small X is? Or is there an amount of suffering X such that NUs would accept it as the unfortunate price to pay rather than “walk away”? (Is “walk away” a euphemism for “kill everyone” here? In the Omelas story, walking away doesn’t actually help prevent any suffering. Working to destroy Omelas would, at least in the long run, depending on how painless the destruction was.)
A separate but related question: What if we also make it so that X doesn’t happen for sure, but rather happens with some probability? How low does that probability have to be before NUs would take the risk, instead of choosing omnicide? Is any probability too low?
It’s good to know we agree on genetically phasing out the biology of suffering! Now for your thought-experiments.
Quantitatively, given a choice between a tiny amount of suffering X + everyone and everything else being great, or everyone dying, would NUs choose omnicide no matter how small X is?
To avoid status quo bias, imagine you are offered the chance to create a type-identical duplicate, New Omelas—again a blissful city of vast delights dependent on the torment of a single child. Would you accept or decline? As an NU, I’d say “no”—even though the child’s suffering is “trivial” compared to the immensity of pleasure to be gained. Likewise, I’d painlessly retire the original Omelas too. Needless to say, our existing world is a long way from Omelas. Indeed, if we include nonhuman animals, then our world may contain more suffering than happiness. Most nonhuman animals in Nature starve to death at an early age; and factory-farmed nonhumans suffer chronic distress. Maybe the classical utilitarian should press a notional OFF button and retire life too.
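To put the comparison in symbols (a toy formalization only; writing H for the city’s total happiness, S for the child’s suffering, and 0 for the value of a painlessly retired Omelas is illustrative bookkeeping, not anything in the story itself):

\[
V_{\mathrm{CU}}(\text{Omelas}) = H - S > 0,
\qquad
V_{\mathrm{NU}}(\text{Omelas}) = -S < 0 = V_{\mathrm{NU}}(\text{retired Omelas}).
\]

The classical utilitarian accepts New Omelas whenever H > S; the NU declines for any S > 0, however vast H may be. And if, for the world as a whole, total suffering exceeds total happiness, then even \(V_{\mathrm{CU}}(\text{world}) = H_{\text{world}} - S_{\text{world}} < 0\), which is the OFF-button worry above.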
A separate but related question: What if we also make it so that X doesn’t happen for sure, but rather happens with some probability? How low does that probability have to be before NUs would take the risk, instead of choosing omnicide? Is any probability too low?
You pose an interesting hypothetical that I’d never previously considered. If I could be 100% certain that NU is ethically correct, then the slightest risk of even trivial amounts of suffering would be too high. However, prudence dictates epistemic humility. So I’d need to think some more before answering.
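In the same toy notation (again only an illustrative sketch; p is the probability that the suffering X actually occurs, and q is one’s credence that NU is the correct ethic):

\[
\mathbb{E}[\text{suffering}] = p\,X > 0 \quad \text{for every } p > 0,\ X > 0,
\]

so under pure NU (q = 1) no probability is low enough: any gamble with p > 0 is worse than the zero-suffering alternative. With q < 1, a naive expected-choiceworthiness weighing would accept the gamble over omnicide whenever

\[
q \cdot p\,X < (1 - q) \cdot G,
\]

where G is how much better the gamble is than omnicide on the rival, non-NU view; how to compare values across moral theories is itself contested, hence the need to think some more before naming a threshold.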
Back in the real world, I believe (on consequentialist NU grounds) that it’s best to enshrine in law the sanctity of human and nonhuman animal life. And (like you) I look forward to the day when we can get rid of suffering—and maybe forget NU ever existed.
Thanks for the clarification!