Note (2022-03-15): For a recent defense of similar axiological views, see my series on minimalist axiologies.
(Crossposted on the EA Forum)
Absolute negative utilitarianism (ANU) is a minority view despite the theoretical advantages of terminal value monism (suffering is the only thing that motivates us “by itself”) over pluralism (there are many such things). Notably, ANU doesn’t require solving value incommensurability, because all other values can be instrumentally evaluated by their relationship to the suffering of sentient beings, using a single common currency grounded in one terminal value.
Therefore, it is a straw man to claim that NUs don’t value life or positive states: NUs value them instrumentally, which may translate into substantial practical efforts to protect them (compared even with someone who claims to be terminally motivated by them).
If the rationality and EA communities are looking for a unified theory of value, why are they not converging (more) on negative utilitarianism?
What have you read about it that has caused you to stop considering it, or to overlook it from the start?
Can you teach me how to see positive states as terminally (and not just instrumentally) valuable, if I currently don’t? (I still enjoy things, being closer to the extreme of hyperthymia than anhedonia. Am I platonically blind to the intrinsic aspect of positivity?)
And if someone wants to answer: What is the most extreme form of suffering that you’ve experienced and believe can be “outweighed” by positive experiences?
I find negative utilitarianism unappealing for roughly the same reason I’d find “we should only care about disgust” or “we should only care about the taste of bananas” unappealing. Or if you think suffering is much closer to a natural kind than disgust, then supply some other mental (or physical!) state that seems more natural-kind-ish to you.
“Only suffering ultimately matters” and “only the taste of bananas ultimately matters” share the virtue of simplicity, but they otherwise run into the same difficulty, which is just that they don’t exhaustively describe all the things I enjoy or want or prefer. I don’t think my rejection of bananatarianism has to be any more complicated than that.
Something I wrote last year in response to a tangentially related paper:
Yes, I am making the (AFAICT, in your perspective) “incredibly, amazingly strong claim” that in a unified theory, only suffering ultimately matters. In other words, impartial compassion is the ultimate scale (comparator) for deciding conflicts between expected suffering and other values (whose common basis for this comparison derives from their complete, often context-dependent relationship to expected suffering, including accounting for the wider incentives & long-term consequences of breaking rules that are practically always honored).
Roughly? Suffering is not an arbitrary foundation for unification (for a “common currency” underlying an ideally shared language for cause prioritization). Suffering is the clearest candidate for a thing we all find terminally motivating, at least once we know what we’re talking about (i.e., aren’t completely out of touch with the nature of extreme suffering, as evaluators & comparators of experiences are expected not to be). Personally, I avoid the semantics of arguing over what we “should” care about. Instead, I attempt to find out what I do care about, what these values’ motivational currency is ultimately derived from, and how I could unify these findings into a psychologically realistic model with consistent practical implications & minimal irreconcilable contradictions (such as outweighing between multiple terminal values; I’m a skeptic of aggregation over space and time, since aggregate experiences physically exist only as thoughts, which are not fit to outweigh suffering; only preventing more suffering can do that).
I agree re: bananatarianism, but there’s more to unpack from the suffering-motivated unification than meets the eye.
No verbal descriptions can exhaustively describe all the things we enjoy, want, or prefer, because our inner homeostatic & psychological dynamics contain processes that are too multidimensional for simple overarching statements. What we can do is unpack the implications of {“Only suffering ultimately matters”} to see how this can imply, predict, retrodict, and explain our other motivations.
In terms of evolutionary and developmental history, we can see at first glance that many (if not immediately all) of our other motivations interact with suffering, or have interacted with our suffering in the past (individually, neurally, culturally, evolutionarily). They serve functions of group cohesion, coping with stress, acquiring resources, intimacy, adaptive learning & growth, social deterrence, self-protection, understanding ourselves, and various other things we value & honor because they make life easier or interesting. Neither NU nor other systems will honor all of our perceived wants as absolutes to maximize (reproduction) or even to respect at all (revenge; animalistic violence; desire for personal slaves or worship), but most of our intuitively nominated “terminal” values need not be overridden by the slightest suffering, because they do serve strong functions to prevent suffering, especially when they seem to us like autonomous goals without constantly reminding us of how horrible things did and would happen without them. NU simply claims that it is the most diplomatic solution for a unified theory to detach from other values as absolutes, and to respect them to the degree that we need them (practicing epistemic uncertainty when we do not yet understand the full role of something we intuitively deeply value!). This may in practice lead to great changes, ideally in directions of more self-compassion and general compassion for others, without our other values overriding our motivation to prevent as many cases of extreme suffering as we can.
A considerate rejection of NU needs to be more complicated than a rejection of bananatarianism, because the unity, applicability, and explanatory power of NU rely on its implications (instead of explicit absolute rules or independent utility assignments for exemplary grounding-units of every value—challenges for other systems), and its weights of instrumental value depend not on static snapshots of worlds, but on the full context and counterfactuals epistemically accessible to us in each situation. In extreme situations, we may decide it worthwhile to simulate the expected long-term consequences of possibly bending rules that we normally accept as near-absolute heuristics to save ourselves from resource-intensive overthinking (e.g., the degree to which we respect someone’s autonomy, in emergencies). This doesn’t imply that superweapon research is a low-hanging fruit for aspiring suffering-minimizers (for reasons I won’t detail right now, because I find the world-wiping objections worth addressing mostly in the context of AGI assumptions; my primary interest here, worth noting, is unification for practical cause prioritization).
To actually reject NU, you must explain what makes something (other than suffering) terminally valuable (or as I say, motivating) beyond its instrumental value for helping us prevent suffering in the total context. This instrumental value is multifaceted and can be derived from various kinds of relationships to suffering. So other “terminal” values may serve important purposes, including that they help us (some examples in parentheses):
cope with suffering (coping mechanisms, friendship, community)
avoid ruminating on suffering (explicit focus on expansive, positive language and goals that don’t contain reminders of their possibly suffering-mediated usefulness)
re-interpret suffering (humor, narratives, catharsis)
prevent suffering (science, technology, cognitive skills & tools)
understand suffering (wide & deep personal experience, culture)
predict suffering (science)
skip epistemic difficulties of trying to optimize others’ suffering for them (autonomy)
prevent the abuse of our being motivated by suffering (human rights, justice system, deterrence)
alleviate others’ suffering (life, health, freedom, personal extra resources to invest as we see fit, reducing x-risk)
build resilience against suffering (experience, intelligence, learning, cultural memory)
stay safe in emergencies (family, intimacy, community, institutions)
relax and regain our ability to help (good food, joy, replenishing activities)
avoid a horrible, anxiety-spreading societal collapse (not getting caught secretly killing people and everyone who knew them “in the name of compassion”, achieved by not doing it in the first place)
To reject NU, is there some value you want to maximize beyond self-compassion and its role in preventing suffering, even at the risk of allowing extreme suffering? How would you explain this to someone undergoing extreme suffering?
NUs are not saying you are deluded for valuing multiple things. But you may be overly attached to them if you—beyond self-compassion—would want to spend your attention on copying/boosting instances of them rather than on preventing others from having to undergo extreme suffering.
After writing this, I wonder if the actual disagreement is still the fear that an NU-AGI would consider humans less {instrumentally valuable for preventing suffering} than it would consider {our suffering terminally worth preventing}. This feels like a very different conversation than what would be a useful basis for a common language of cause prioritization.
Seems like all of this could also be said of things like “preferences”, “enjoyment”, “satisfaction”, “feelings of correctness”, “attention”, “awareness”, “imagination”, “social modeling”, “surprise”, “planning”, “coordination”, “memory”, “variety”, “novelty”, and many other things.
“Preferences” in particular seems like an obvious candidate for ‘thing to reduce morality to’; what’s your argument for only basing our decisions on dispreference or displeasure and ignoring positive preferences or pleasure (except instrumentally)?
I’m not sure I understand your argument here. Yes, values are complicated and can conflict with each other. But I’d rather try to find reasonable-though-imperfect approximations and tradeoffs, rather than pick a utility function I know doesn’t match human values and optimize it instead just because it’s uncomplicated and lets us off the hook for thinking about tradeoffs between things we ultimately care about.
E.g., I like pizza. You could say that it’s hard to list every possible flavor I enjoy in perfect detail and completeness, but I’m not thereby tempted to stop eating pizza, or to try to reduce my pizza desire to some other goal like ‘existential risk minimization’ or ‘suffering minimization’. Pizza is just one of the things I like.
E.g.: I enjoy it. If my friends have more fun watching action movies than rom-coms, then I’ll happily say that that’s sufficient reason for them to watch more action movies, all on its own.
Enjoying action movies is less important than preventing someone from being tortured, and if someone talks too much about trivial sources of fun in the context of immense suffering, then it makes sense to worry that they’re a bad person (or not sufficiently in touch with their compassion).
But I understand your position to be not “torture matters more than action movies”, but “action movies would ideally have zero impact on our decision-making, except insofar as it bears on suffering”. I gather that from your perspective, this is just taking compassion to its logical conclusion; assigning some more value to saving horrifically suffering people than to enjoying a movie is compassionate, so assigning infinitely more value to the one than the other seems like it’s just dialing compassion up to 11.
One reason I find this uncompelling is that I don’t think the right way to do compassion is to ignore most of the things people care about. I think that helping people requires doing the hard work of figuring out everything they value, and helping them get all those things. That might reduce to “just help them suffer less” in nearly all real-world decisions nowadays, because there’s an awful lot of suffering today; but that’s a contingent strategy based on various organisms’ makeup and environment in 2019, not the final word on everything that’s worth doing in a life.
I’ll tell them I care a great deal about suffering, but I don’t assign literally zero importance to everything else.
NU people I’ve talked to often worry about scenarios like torture vs. dust specks, and that if we don’t treat happiness as literally of zero value, then we might make the wrong tradeoff and cause immense harm.
The flip side is dilemmas like:
Suppose you have a chance to push a button that will annihilate all life in the universe forever. You know for a fact that if you don’t push it, then billions of people will experience billions upon billions of years of happy, fulfilling, suffering-free life, filled with richness, beauty, variety, and complexity; filled with the things that make life most worth living, and with relationships and life-projects that people find deeply meaningful and satisfying.
However, you also know for a fact that if you don’t push the button, you’ll experience a tiny, almost-unnoticeable itch on your left shoulder blade a few seconds later, which will be mildly unpleasant for a second or two before the Utopian Future begins. With this one exception, no suffering will ever again occur in the universe, regardless of whether you push the button. Do you push the button, because your momentary itch matters more than all of the potential life and happiness you’d be cutting out?
And if you say “I don’t push the button, but only because I want to cooperate with other moral theorists” or “I don’t push the button, but only because NU is very very likely true but I have nonzero moral uncertainty”: do you really think that’s the reason? Does that really sound like the prescription of the correct normative theory (modulo your own cognitive limitations and resultant moral uncertainty)? If the negotiation-between-moral-theories spat out a slightly different answer, would this actually be a good idea?
This comment doesn’t seem to sufficiently engage with (what I saw as) the core question Rob was asking (and which I would ask), which was:
You briefly note “you may be overly attached to them”, but this doesn’t give any arguments for why I might be overly attached to them, instead of attached to them the correct amount.
When you ask:
My response is “to reject NU, all I have to do is terminally care about anything other than suffering. I care about things other than suffering, ergo NU must be false, and the burden is on other people to explain what is wrong with my preferences.”
I used to consider myself NU, but have since then rejected it.
Part of my rejection was that, on a psychological level, it simply didn’t work for me. The notion that everything only has value to the extent that it reduces suffering meant that most of the things I cared about were pointless and meaningless, except for their instrumental value in reducing my suffering or making me more effective at reducing suffering. Doing things which I enjoyed, but constantly having a nagging sensation of “if I could just learn to no longer need this, then it would be better for everyone”, basically meant that it was very hard to ever enjoy anything. It was basically setting my mind up to be a battlefield, dominated by an NU faction trying to suppress any desires which did not directly contribute to reducing suffering, and opposed by an anti-NU faction which couldn’t do much, but could at least prevent me from getting any effective NU work done.
Eventually it became obvious that even from an NU perspective, it would be better for me to stop endorsing NU, since that way I might end up actually accomplishing more suffering reduction than if I continued to endorse NU. And I think that this decision was basically correct.
A related reason is that I also rejected the need for a unified theory of value. I still think that if you wanted to reduce human values into a unified framework, then something like NU would be one of the simplest and least paradoxical answers. But eventually I concluded that any simple unified theory of value is likely to be wrong, and also not particularly useful for guiding practical decision-making. I’ve written more about this here.
Finally, and as a more recent development, I notice that NU neglects to take into account non-suffering-based preferences. My current model of minds and suffering is that minds are composed of many different subagents with differing goals; suffering is the result of different subagents being in conflict (e.g. if one subagent wants to push through a particular global belief update which another subagent does not wish to accept).
This means that I could imagine an advanced version of myself who had gotten rid of all personal suffering, but was still motivated to pursue other goals. Suppose for the sake of argument that I only had subagents which cared about 1) seeing friends and 2) making art. Now if my subagents reached an agreement to spend 30% of their time making art and 70% of their time seeing friends, then this could in principle eliminate my suffering by removing subagent conflict, but it would still be driving me to do things for reasons other than reducing suffering. Thus the argument that suffering is the only source of value fails; the version of me which had eliminated all personal suffering might be more driven to do things than the current one (since subagent conflict would no longer be blocking action in any situation)!
As a practical matter, I still think that reducing suffering is one of the most urgent EA priorities: as long as death and extreme suffering exist in the world, anything that would be called “altruism” should focus its efforts on reducing them. But this is a form of prioritarianism, not NU. I do not endorse NU’s prescription that an entirely dead world would be equally good as, or better than, a world with lots of happy entities, simply because there are subagents within me who would prefer to exist and continue to do stuff, and who would also prefer for other people to continue to exist and do stuff if they so prefer. I want us to liberate people’s minds from involuntary suffering, and then to let people do whatever they still want to do once suffering is something that people experience only voluntarily.
I think most of this is compatible with preference utilitarianism (or consequentialism generally), which, in my view, is naturally negative. Nonnegative preference utilitarianism would hold that it could be good to induce preferences in others just to satisfy these preferences, which seems pretty silly.
Thanks for the perspective.
I agree that even NU may imply rejecting NU in its present form, because it does not feel like a psychologically realistic theory to constantly apply in everyday life; we are more motivated to move towards goals and subgoals that do not carry explicit reminders of extreme suffering on the flip side.
I do feel that I am very close to NU whenever I consider theoretical problems and edge-cases that would outweigh extreme suffering with anything other than preventing more extreme suffering. In practice, it may be more applicable (and ironically, more useful according to NU) to replace NU with asymptotically approaching uncompromising compassion, or an equivalent positive-language narrative, while applying heavy doses of epistemic humility whenever it starts to feel like a good idea to override any explicit preferences of others.
Epistemic humility is also why I cannot imagine a situation where I would push The Button, because I cannot imagine how I could know with certainty that it works (so I cannot derive an intuition from that thought experiment).
I believe the most often cited (in the LW/EA communities) paper arguing against NU is Toby Ord’s Why I’m Not a Negative Utilitarian. This and this seem to be the main replies to it from NU perspectives. (I think I’ve skimmed some of these articles but have not actually considered the arguments carefully.)
Ethical theories don’t need to be simple. I used to believe that ethical theories ought to be simple/elegant/non-arbitrary for us to have a shot at them being the correct theory, a theory that intelligent civilizations with different evolutionary histories would all converge on. This made me think that NU might be that correct theory. Now I’m confident that this sort of thinking was confused: I think there is no reason to expect that intelligent civilizations with different evolutionary histories would converge on the same values, or that there is one correct set of ethics that they “should” converge on if they were approaching the matter “correctly”. So, looking back, my older intuition feels confused in a similar way to ordering the simplest food in a restaurant because that is what you anticipate others would order if they, too, thought the goal was for everyone to order the same thing. Now I just want to order the “food” that satisfies my personal criteria (and these criteria do happen to include placing value on non-arbitrariness/simplicity/elegance, but I’m a bit less single-minded about it).
Your way of unifying psychological motivations down to suffering reduction is an “externalist” account of why decisions are made, which is different from the internal story people tell themselves. Why think all people who tell different stories are mistaken about their own reasons? The point “it is a straw man argument that NUs don’t value life or positive states” is unconvincing, as others have already pointed out. I actually share your view that a lot of things people do might in some way trace back to a motivating quality in feelings of dissatisfaction, but (1) there are exceptions to that (e.g., sometimes I do things on auto-pilot and not out of an internal sense of urgency/need, and sometimes I feel agenty and do things in the world to achieve my reflected life goals rather than tend to my own momentary well-being), and (2) that doesn’t mean that whichever parts of our minds we most identify with need to accept suffering reduction as the ultimate justification of their actions. For instance, let’s say you could prove that the true proximate cause of a person’s refusal to enter Nozick’s experience machine was that, when they contemplated the decision, they felt really bad about the prospect of learning that their own life goals are shallower and more self-centered than they would have thought, and *therefore* they refused the offer. Your account would say: “They made this choice driven by the avoidance of bad feelings, which just shows that ultimately they should accept the offer, or choose whichever offer reduces more suffering all-things-considered.” Okay yeah, that’s one story to tell. But the person in question tells herself the story that she made this choice because she has strong aspirations about what type of person she wants to be. Why would your externally-imported justification be more valid (for this person’s life) than her own internal justification?
Thanks for the replies, everyone!
I don’t have the time to reply back individually, but I read them all and believe these to be pretty representative of the wider community’s reasons to reject NU as well.
I can’t speak for those who identify strictly as NU, but while I currently share many of NU’s answers to theoretical outweighing scenarios, I do find it difficult to unpack all the nuance it would take to reconcile “NU as CEV” with our everyday experience.
Therefore, I’ll likely update further away from
{attempting to salvage NU’s reputation by bridging it with compassion, motivation theory, and secular Buddhism}
towards
{integrating these independent of NU, seeing if this would result in a more relatable language, or if my preferred kind of theoretical unity (without pluralist outweighing) would still have the cost of its sounding absurd and extreme on its face}
Did you make any update regarding the simplicity / complexity of value?
My impression is that theoretical simplicity is a major driver of your preference for NU, and also that if others (such as myself) weighed theoretical simplicity more highly, they would likely be more inclined towards NU.
In other words, I think theoretical simplicity may be a double crux in the disagreements here about NU. Would you agree with that?
Yes, in terms of how others may explicitly defend the terminal value of even preferences (tastes, hobbies), instead of defending only terminal virtues (health, friendship), or core building blocks of experience (pleasure, beauty).
No, in terms of assigning anything {independent positive value}.
I experience all of the things quoted in Complexity of value, but I don’t know how to ultimately prioritize between them unless they are commensurable. I make them commensurable by weighing their interdependent value in terms of the one thing we all(?) agree is an independent motivation: preventable suffering. (If preventable suffering is not worth preventing for its own sake, what is it worth preventing for, and is this other thing agreeable to someone undergoing the suffering as the reason for its motivating power?) This does not mean that I constantly think of them in these terms (that would be counterproductive), but in conflict resolution I do not assign them independent positive numerical values, which pluralism would imply one way or another.
Any pluralist theory invites the question of outweighing suffering with enough of any independently positive value. If you think about it for five minutes, aggregate happiness (or any other experience) does not exist. If our first priority is to prevent preventable suffering, that alone is an infinite game; it does not help to make a detour to boost/copy positive states unless this is causally connected to preventing suffering. (Aggregates of suffering do not exist either, but each moment of suffering is terminally worth preventing, and we have limited attention, so aggregates and chain-reactions of suffering are useful tools of thought for preventing as many as we can. So are many other things, without requiring our attaching them independent positive value, or else we would be tiling Mars with them whenever it outweighed helping suffering on Earth according to some formula.)
My experience so far with this kind of unification is that it avoids many (or even all) of the theoretical problems that are still considered canonical challenges for pluralist utilitarianisms that assign both independent negative value to suffering and independent positive value to other things. I do not claim that this would be simple or intuitive – that would be analogous to reading about some Buddhist system, realizing its theoretical unity, and teleporting past its lifelong experiential integration – but I do claim that a unified theory with grounding in a universally accepted terminal value might be worth exploring further, because we cannot presuppose that any kind of CEV would be intuitive or easy to align oneself with.
Partly, yes. It may also be that all of us, me included, are out of touch with the extreme ends of experience and thus do not understand the ability of some motivations to override everything else.
It is also difficult to operationalize a false belief in independent value: When are we attached to a value to the extent that we would regret not spending its resources elsewhere, on CEV-level reflection?
People also differ in their background assumptions about whether AGI makes the universally life-preventing button a relevant question, because for many, the idea of an AGI represents an omnipotent optimizer that will decide everything about the future. If so, we want to be careful about assigning independent positive value to all the things, because each one of those invites this AGI to consider {outweighing suffering} with {producing those things}, since pluralist theories do not require a causal connection between the things being weighed.
Are they? Many of us seem to have accepted that our values are complex.
This seems like an argument that it would be convenient if our values were simple. This does not seem like strong evidence that they actually are simple. (Though I grant that you could make an argument that it might be better to try to achieve only part of what we value if we’re much more likely to be successful that way.)
I reject impartiality on the grounds that I’m a personal identity and therefore not impartial. The utility of others is not my utility, therefore I am not a utilitarian. I reject unconditional altruism in general for this reason. It amazes me in hindsight that I was ever dumb enough to think otherwise.
Teach, no, but there are some intuitions that can be evoked. I’d personally accept a 10:1 ratio between pleasure and pain; if I get 10 times more pleasure out of something, I’ll accept the pain as a cost. It’s just usually not realistic, which is why I don’t agree that life has generally positive value.
There are fictional descriptions of extreme pleasure enhancement and wireheading, e.g. in fantasy, that describe worthwhile states of experience. The EA movement is fighting against wireheading, as you can see in avturchin’s posts. But I think such a combination of enhancement + wireheading could plausibly come closest to delivering net-positive value of life, if it could be invented (although I don’t expect it in my lifetime, so it’s only theoretical). Here’s an example from fiction:
When I say that I’m a utilitarian (or something utilitarian-ish), I mean something like: If there were no non-obvious bad side-effects — e.g., it doesn’t damage my ability to have ordinary human relationships in a way that ends up burning more value than it creates — I’d take a pill that would bind my future self to be unwilling to sacrifice two strangers to save a friend (or to save myself), all else being equal.
The not-obviously-confused-or-silly version of utilitarianism is “not reflectively endorsing extreme partiality toward yourself or your friends relative to strangers”, rather than “I literally have no goals or preferences or affection for anything other than perfectly unbiased maximization of everyone’s welfare”.
If you flip the Rachels-Temkin spectrum argument (philpapers.org/archive/NEBTGT.pdf), then some tradeoff between happiness and suffering is needed to keep preferences transitive, which is necessary to avoid weird conclusions like accepting suffering to avoid happiness. As long as you don’t think there’s some suffering threshold where 1 more util of suffering is infinitely worse than anything else, this makes sense.
Also NU in general has a bad reputation in the philosophy community (more than classical utilitarianism I think) so it’s better EAs don’t endorse it.
Can you give a practical example of a situation where I would be hereby forced to admit that happiness has terminal value above its instrumental value for my preventing as many suffering moments as I can?
I don’t see why {resolving conflicts by weighing everything (ultimately) in suffering} would ever lead me to {“accept suffering to avoid happiness”}, if happiness already can be weighed against suffering in terms of its suffering-preventing effects—just not by itself, which is what many other utilitarianisms rely on, inviting grotesque problems like doctors having parties so great that they outweigh the untreated suffering of their patients.
Are there also practical situations where I’d want to admit that paperclips have terminal value, or else accept suffering to avoid paperclips?
I don’t see what hidden assumptions I’m missing here. I certainly don’t think an infinitely large paperclip is an acceptable comparand to outweigh any kind of suffering. In the case of happiness, it depends completely on whether the combined causal cascades from this happiness are expected to prevent more suffering than the current comparand suffering: no need to attach any independent numerical terminal value to happiness itself, or we’d be back to counting happy sheep believing it to outweigh someone’s agony any moment now.
I believe the first part of this statement may currently be true for the WEIRD (western, educated, industrialized, rich, democratic) philosophy community. Other parts of the world have long histories and living traditions of suffering-based views, primarily various forms of Buddhism. In what I’ve read about Mahayana Buddhism (or the Bodhisattva path), compassion is often explicitly identified as the only necessary motivation that implies and/or transcends all the outwardly visible customs, rules, and ethics, and compassion is the voice to listen to when other “absolutes” conflict. (Omnicidal superweapon research is not part of these philosophies of compassion, but was invented, in my estimation, as an implication of NU by later armchair rationalists to easily dismiss NU.)
I’ll take the second part of your statement as your current personal opinion of NU in its present form and perceived reputation. I am personally still optimistic that suffering is the most universal candidate to derive all other values from, and I would be careful not to alienate a large segment of systematic altruists such as might be found among secular, rationalist Buddhists. I mostly agree though, that NU in its present form may be tainted by the prevalence of the world-destruction argument (even though it is argued to represent only a straw man NU by proponents of NU).