If you flip the Rachels-Temkin spectrum argument (philpapers.org/archive/NEBTGT.pdf), then some tradeoff between happiness and suffering is needed to keep preferences transitive, which is necessary to avoid weird conclusions like accepting suffering to avoid happiness. As long as you don’t think there’s some suffering threshold where one more util of suffering is infinitely worse than anything else, this makes sense.
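To make the transitivity point concrete, here is a toy reconstruction (mine, not the paper’s exact wording): take outcomes $A_0, A_1, \ldots, A_n$, where each $A_{k+1}$ contains one more util of suffering and an enormous additional amount of happiness compared with $A_k$. If every step $A_k \prec A_{k+1}$ counts as an improvement (a tiny cost for a huge gain), then transitivity forces $A_0 \prec A_n$, so a large amount of suffering can be outweighed by enough happiness, i.e. some finite exchange rate exists. The only way to block that conclusion while keeping transitivity is to reject one of the single steps, which amounts to saying that, at that point, one extra util of suffering is worse than any finite amount of happiness.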
Also, NU in general has a bad reputation in the philosophy community (more so than classical utilitarianism, I think), so it’s better if EAs don’t endorse it.
If you flip the Rachels-Temkin spectrum argument (philpapers.org/archive/NEBTGT.pdf), then some tradeoff between happiness and suffering is needed to keep preferences transitive, which is necessary to avoid weird conclusions like accepting suffering to avoid happiness. As long as you don’t think there’s some suffering threshold where one more util of suffering is infinitely worse than anything else, this makes sense.
Can you give a practical example of a situation where this would force me to admit that happiness has terminal value over and above its instrumental value for preventing as many moments of suffering as I can?
I don’t see why resolving conflicts by weighing everything (ultimately) in terms of suffering would ever lead me to “accept suffering to avoid happiness”, if happiness can already be weighed against suffering in terms of its suffering-preventing effects. It just cannot be weighed by itself, which is what many other utilitarianisms rely on, inviting grotesque problems like doctors throwing parties so great that they outweigh the untreated suffering of their patients.
Are there also practical situations where I’d want to admit that paperclips have terminal value, or else accept suffering to avoid paperclips?
I don’t see what hidden assumptions I’m missing here. I certainly don’t think an infinitely large paperclip is an acceptable comparand to outweigh any kind of suffering. In the case of happiness, it depends entirely on whether the combined causal cascades from that happiness are expected to prevent more suffering than the suffering it is being weighed against: there is no need to attach any independent numerical terminal value to happiness itself, or we’d be back to counting happy sheep in the belief that they outweigh someone’s agony any moment now.
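To spell out the weighing rule I have in mind (my own notation, a sketch rather than a canonical NU formulation): rank an option $o$ only by the suffering expected downstream of it, $V(o) = -\mathbb{E}[S(o)]$, with no separate happiness term. Happiness still matters, but only through $S$: if the doctors’ party leaves them rested enough to prevent an expected 10 units of patient suffering while causing 4 units of untreated suffering tonight, the rule favors the party; if the numbers were reversed, it would not, and no amount of enjoyment at the party itself could change that. Classical utilitarianism instead maximizes something like $\mathbb{E}[H(o)] - \mathbb{E}[S(o)]$, and that extra happiness term is exactly where the parties-outweighing-patients problem comes from.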
Also, NU in general has a bad reputation in the philosophy community (more so than classical utilitarianism, I think), so it’s better if EAs don’t endorse it.
I believe the first part of this statement may currently be true for the WEIRD (Western, educated, industrialized, rich, democratic) philosophy community. Other parts of the world have long histories and living traditions of suffering-based views, primarily various forms of Buddhism. In what I’ve read about Mahayana Buddhism (or the Bodhisattva path), compassion is often explicitly identified as the only necessary motivation, one that implies and/or transcends all the outwardly visible customs, rules, and ethics, and as the voice to listen to when other “absolutes” conflict. (Omnicidal superweapon research is not part of these philosophies of compassion; in my estimation it was invented as an implication of NU by later armchair rationalists in order to dismiss NU easily.)
I’ll take the second part of your statement as your current personal opinion of NU in its present form and perceived reputation. I am personally still optimistic that suffering is the most universal candidate from which to derive all other values, and I would be careful not to alienate a large segment of systematic altruists, such as might be found among secular, rationalist Buddhists. I mostly agree, though, that NU in its present form may be tainted by the prevalence of the world-destruction argument (even though proponents of NU argue that it targets only a straw-man version of NU).