Net Utility and Planetary Biocide
I’ve started listening to the audiobook of Peter Singer’s Ethics in the Real World, which is both highly recommended and very unsettling. The essays on non-human animals, for example, made me realize for the first time that it may well be possible that the net utility on Earth over all conscious creatures is massively negative.
Naturally, this led me to wonder whether, after all, efforts to eradicate all consciousness on Earth—human and non-human—may be ethically endorsable. This, in turn, reminded me of a recent post on LW asking whether the possibility of parallelized torture of future uploads justifies killing as many people as possible today.
I had responded to that post by mentioning that parallelizing euphoria was also possible, so this should cancel things out. This seemed at the time like a refutation, but I realized later I had made the error of treating the two, utility and disutility, as parts of the same smooth continuum, like the interval [-100, 100] ⊂ ℝ. There is no reason to believe the maximum disutility I can experience is equal in magnitude to the maximum utility I can experience. It may be that max disutility is far greater. I really don’t know, and I don’t think introspection is as useful in answering this question as it intuitively seems to be, but it seems quite plausible for this to be the case.
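To sketch the asymmetry (just a way of stating the intuition, not a claim about how experience is actually structured): instead of a symmetric interval, the attainable range of momentary utility might look more like

$$u \in [-D,\; U] \subset \mathbb{R}, \qquad D \gg U,$$

in which case parallelized torture cannot simply be cancelled out by an equal amount of parallelized euphoria.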
As these thoughts were emerging, Singer, as if hearing my concerns, quoted someone or other who claimed that the human condition is one of perpetual suffering, constantly pursuing desires whose satisfaction proves ephemeral and dissatisfying, and that it is therefore a morally tragic outcome for any of us to have emerged into existence.
Of course these are shoddy arguments in support of Mass Planetary Biocide (MPB), even supposing the hypothesis that the Earth (universe?) has net negative utility is true. For one, we could engineer minds somewhere in a better neighborhood of mindspace, where utility is everywhere positive. Or maybe it’s impossible even in theory to treat utility and disutility like real-valued functions of physical systems over time (though I’m betting it is possible). Or maybe the universe is infinite, so even if 99% of conscious experiences in the universe have disutility, there are infinite quantities of both utility and disutility and so nothing we do matters, as Bostrom wrote about. (Although this is actually not an argument against MPB, just not one for it.) And anyway, the state of net utility today is not nearly as important as what the state of net utility could potentially be in the future. And perhaps utilitarianism is a naive and incorrect ethical framework.
Still, I had somehow always assumed implicitly that net utility of life on Earth was positive, so the realization that this need not be so is causing me significant disutility.
First of all, I don’t think that morality is objective as I’m a proponent of moral anti-realism. That means that I don’t believe that there is such a thing as “objective utility” that you could objectively measure.
But, to use your terms, I also believe that there currently exists more “disutility” than “utility” in the world. I’d formulate it this way: I think there exists more suffering (disutility, disvalue, etc.) than happiness (utility, value, etc.) in the world today. Note that this is just a consequence of my own personal values, in particular my “exchange rate” or “trade ratio” between happiness and suffering: I’m (roughly) utilitarian but I give more weight to suffering than to happiness. But this doesn’t mean that there is “objectively” more disutility than utility in the world.
For example, I would not push a button that creates a city with 1000 extremely happy beings but where 10 people are being tortured. But a utilitarian with a more positive-leaning trade ratio might want to push the button because the happiness of the 1000 outweighs the suffering of the 10. Although we might disagree, neither of us is “wrong”.
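To put the trade ratio in toy numbers (my own illustration, nothing rigorous): if the city contains total happiness $h$ and total suffering $s$ in some common unit, and suffering gets weight $k$, the button’s value is roughly

$$V = h - k \cdot s.$$

A classical utilitarian ($k = 1$) may well get $V > 0$ for the 1000-and-10 city, while someone with a sufficiently suffering-focused $k$ gets $V < 0$ for the very same numbers. The disagreement is entirely about $k$, not about the facts.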
Similar reasoning applies with regard to the “expected value” of the future. Or, to use a less confusing term, the ratio of expected happiness to suffering in the future. Crucially, this question has both an empirical and a normative component. The expected value (EV) of the future for a person will depend both on her normative trade ratio and on her empirical beliefs about the future.
I want to emphasize, however, that even if one thinks that the EV of the future is negative, one should not try to destroy the world! There are many reasons for this, so I’ll just pick a few: First, it’s extremely unlikely that you would succeed, and you would probably only cause more suffering in the process. Secondly, planetary biocide is one of the worst possible things one can do according to many value systems. I think it’s extremely important to be nice to other value systems and promote cooperation among their proponents. If you attempted to implement planetary biocide, you would cause distrust, probably violence, and the breakdown of cooperation, which would only increase future suffering, hurting everyone in expectation.
Below, I list several more relevant essays that expand on what I’ve written here and which I can highly recommend. Most of these link to the Foundational Research Institute (FRI) which is not a coincidence as FRI’s mission is to identify cooperative and effective strategies to reduce future suffering.
I. Regarding the empirical side of future suffering
Reducing Risks of Astronomical Suffering: A Neglected Priority.
Against Wishful Thinking
Risks of Astronomical Future Suffering
II. On the benefits of cooperation
Gains from Trade through Compromise
Differential Intellectual Progress as a Positive-Sum Project
Reasons to Be Nice to Other Value Systems
III. On ethics
Measuring Happiness and Suffering
What Is the Difference Between Weak Negative and Non-Negative Ethical Views?
Are pain and pleasure equally energy-efficient?
I would add that—according to MWI—even if you succeed at planetary biocide, it simply means you are removing life from those Everett branches where humanity is able to successfully accomplish planetary biocide. These are, coincidentally, also the branches with the highest chance of eliminating or reducing future suffering.
It would be quite sad if the last filter on the way to paradise were that any civilization capable of achieving it realises it is not there yet and concludes that the best course of action is to kill itself.
I’m not convinced that perpetual suffering is particularly human. We could be the species of animal that suffers least on an average day, since we have better solutions to hunger and thirst than anyone else and no predator is likely to disembowel us and our offspring in our sleep.
So it seems to me what you’re really doing is questioning the value of (conscious) life itself. Is that right?
It is an old question that has been answered many ways, because no single answer has appealed to everybody. Buddhism is one answer that I particularly dislike but is apparently soothing to many.
To me, an indictment of life itself as not worth living is a reductio ad absurdum of the whole project of reducing the complexity of literally everything to a single one-dimensional utility–disutility scale on which everything is commensurable. (The paperclip maximizer is another.)
My personal supposition is that (conscious) life is an engine that runs on (conscious) suffering to produce (conscious) understanding. And since there are probably innumerable lifeless universes, I’d rather have one with suffering and understanding in it, if only for variety, than yet another lifeless one. I don’t expect to convince you; I’m just saying this works for me.
Regarding your last point: is a hellish world preferable to an empty one?
Yes, because it has more potential for improvement.
The Earth of a million years ago, where every single animal was fighting for its life in an existence of pain and hunger, was more hellish than the present one, where at least a percent or so are comparatively secure. So that’s an existence proof of hellishness going away.
Emptiness doesn’t go away. Empty worlds evidently tend to stay empty. We now see enough of them well enough to know that.
Obligatory xkcd.
That someone wouldn’t be Buddha, would it?
Most sentient creatures can commit suicide. The great majority don’t. You think they are all wrong?
(I don’t think this is about right or wrong. But we can try to exchange arguments and intuition pumps and see if someone changes their mind.)
Imagine a scientist who engineered artificial beings destined for a life of constant misery but equipped with an overriding desire to stay alive and conscious. Such an endeavor would strike me not merely as weird or pointless, but as something I’d strongly prefer not to happen. Maybe natural selection is quite like that scientist; it made sure organisms don’t kill themselves not by making it easy for everyone to be happy, but by installing instinctual drives for survival.
Further reasons (whether rational or not) to not commit suicide despite having low well-being include fear of consequences in an afterlife, impartial altruistic desires to do something good in the world, “existentialist” desires to not kill oneself without having lived a meaningful life, near-view altruistic desires to not burden one’s family or friends, fear of dying, etc. People often end up not doing things that would be good for them and their goals due to trivial inconveniences, and suicide seems more “inconvenient” than most things people get themselves to do in pursuit of their interests. Besides, depressed people are not exactly known for high willpower.
Biases in affective forecasting and distorted memories could also play a role. (My memories from high school are pretty good, even though if you travelled back and asked me how I was doing, most of the time the reply would have been something like “I’m soo tired and don’t want to be here!”)
Then there’s influence from conformity: I saw a post recently about a guy in Japan who regularly goes to a suicide hotspot to prevent people from jumping. Is he doing good or being an asshole? Most people seem to have the mentality that suicide is usually (or always even) bad for the person who does it. While there are reasons to be very careful with irreversible decisions – and certainly many suicides are impulsive and therefore at high risk of bias – it seems like there is an unreasonably strong anti-suicide ideology. Not to mention the religious influences on the topic.
All things considered, it wouldn’t surprise me if some people also just talk themselves out of suicide with whatever they manage to come up with, whether that is rational given their reflective goals or not. Relatedly, another comment here advocates trying to change what you care about in order to avoid being a Debbie Downer to yourself and others: http://lesswrong.com/r/discussion/lw/ovh/net_utility_and_planetary_biocide/dqub
Also relevant: when evaluating the value of a person’s life, do we go with overall life satisfaction or with average momentary well-being? Becoming a mother, in expectation, helps with the former but is bad for the latter – tough choice.
Caring substantially about anything other than one’s own well-being makes suicide the opposite of a “convergent drive” – agents whose goals include facets of the outside world will want to avoid killing themselves at high costs, because that would prevent them from further pursuit of these goals. We should therefore distinguish between “Is a person’s life net positive according to the person’s goals?” and “Is a life net positive in terms of all the experience moments it adds to the universe’s playlist?” The latter is not an empirical question; it’s more of an aesthetic judgment relevant to those who want to pursue a notion of altruism that is different from just helping others go after their preferences, and instead includes concern for (a particular notion of) well-being.
This will inevitably lead to “paternalistic” judgments where you want the universe’s playlist to be a certain way, conflicting with another agent’s goals. Suppose my life is very happy but I don’t care much for staying alive – then some would claim I have an obligation to continue living, and I’d be doing harm to their preferences if I’m not sufficiently worried about personal x-risks. So the paternalism goes both ways; it’s not just something that suffering-focused views have to deal with.
Being cooperative in the pursuit of one’s goals gets rid of the bad connotations of paternalism. It is sensible to think that net utility is negative according to one’s preferences for the playlist of experience moments, while not concluding that this warrants strongly violating other people’s preferences.
Also relevant: SSC’s “How Bad Are Things?”.
The survival instinct part, very probably, but the “constant misery” part doesn’t look likely.
Actually, I don’t understand where the “animals have negative utility” thing is coming from. Sure, let’s postulate that fish can feel pain. So what? How do you know that fish don’t experience intense pleasure from feeling water stream by their sides?
I just don’t see any reasonable basis for deciding what the utility balance for most animals looks like. And from the evolutionary standpoint the “constant misery” is nonsense—constant stress is not conducive to survival.
Are we talking about humans now? I thought the OP considered humans to be more or less fine, it’s the animals that were the problem.
Does anyone claim that the net utility of humanity is negative?
I have no idea what this means.
Ah. Well then, let’s kill everyone who fails our aesthetic judgment..?
That’s a very common attitude—see e.g. attitudes to abortion, to optional wars, etc. However “paternalistic” implies an imbalance of power—you can’t be paternalistic to an equal.
Agree, I meant to use the analogy to argue for “Natural selection made sure that even those beings in constant misery may not necessarily exhibit suicidal behavior.” (I do hold the view that animals in nature suffer a lot more than they are happy, but that doesn’t follow from anything I wrote in the above post.)
Right, but I thought your argument about sentient beings not committing suicide refers to humans primarily. At least with regard to humans, exploring why the appeal to low suicide rates may not show much seems more challenging. Animals not killing themselves could just be due to them lacking the relevant mental concepts.
It’s a metaphor. Views on population ethics reflect what we want the “playlist” of all the universe’s experience moments to be like, and there’s no objective sense of “net utility being positive” or not. Except when you question-beggingly define “net utility” in a way that implies a conclusion, but then anyone who disagrees will just say “I don’t think we should define utility that way” and you’re left arguing over the same differences. That’s why I called it “aesthetic” even though that feels like it doesn’t give the seriousness of our moral intuitions due justice.
(And force everyone to live against their will if they do conform to it?) No; I specifically said not to do that. Viewing morality as subjective is supposed to make people more appreciative that they cannot go around completely violating the preferences of those they disagree with without the result being worse for everyone.
Lukas, I wish you had a bigger role in this community.
Not sure this is the case. I would expect that natural selection made sure that no being is systematically in constant misery and so there is no need for the “but if you are in constant misery you can’t suicide anyways” part.
I still don’t understand what that means. Are you talking about believing that other people should have particular ethical views and it’s bad if they don’t?
Well, the OP thinks it might be reasonable to kill everything with a nervous system because in his view all of them suffer too much. However if that is just an aesthetic judgement...
Well, clearly not everyone since you will have winners and losers. And to evaluate this on the basis of some average/combined utility requires you to be a particular kind of utilitarian.
I’m trying to say that other people are going to disagree with you or me about how to assess whether a given life is worth continuing or worth bringing into existence (big difference according to some views!), and on how to rank populations that differ in size and the quality of the lives in them. These are questions that the discipline of population ethics deals with, and my point is that there’s no right answer (and probably also no “safe” answer where you won’t end up disagreeing with others).
This^^ is all about a “morality as altruism” view, where you contemplate what it means to “make the world better for other beings.” I think this part is subjective.
There is also a very prominent “morality as cooperation/contract” view, where you contemplate the implications of decision algorithms correlating with each other, and notice that it might be a bad idea to adhere to principles that lead to outcomes worse for everyone in expectation provided that other people (in sufficiently similar situations) follow the same principles. This is where people start with whatever goals/preferences they have and derive reasons to be nice and civil to others (provided they are on an equal footing) from decision theory and stuff. I wholeheartedly agree with all of this and would even say it’s “objective” – but I would call it something like “pragmatics for civil society” or maybe “decision theoretic reasons for cooperation” and not “morality,” which is the term I reserve for (ways of) caring about the well-being of others.
It’s pretty clearly apparent that “killing everyone on earth” is not in most people’s interest, and I appreciate that people are pointing this out to the OP. However, I think what the replies are missing is that there is a second dimension, namely whether we should be morally glad about the world as it currently exists, and whether e.g. we should make more worlds that are exactly like ours, for the sake of the not-yet-born inhabitants of these new worlds. This is what I compared to voting on what the universe’s playlist of experience moments should be like.
But I’m starting to dislike the analogy. Let’s say that existing people have aesthetic preferences about how to allocate resources (this includes things like wanting to rebuild galaxies into a huge replica of Simpsons characters because it’s cool), and of these, a subset are simultaneously also moral preferences in that they are motivated by a desire to do good for others, and these moral preferences can differ in whether they count it as important to bring about new happy beings or not, or how much extra happiness is needed to altruistically “compensate” (if that’s even possible) for the harm of a given amount of suffering, etc. And the domain where people compare each others’ moral preferences and try to see if they can get more convergence through arguments and intuition pumps, in the same sense as someone might start to appreciate Mozart more after studying music theory or whatever, is population ethics (or “moral axiology”).
Of course, that’s a given.
So is this discipline basically about ethics of imposing particular choices on other people (aka the “population”)? That makes it basically the ethics of power or ethics of the ruler(s).
You also call it “morality as altruism”, but I think there is a great deal of difference between having power to impose your own perceptions of “better” (“it’s for your own good”) and not having such power, being limited to offering suggestions and accepting that some/most will be rejected.
What happens with this view if you accept that diversity will exist and at least some other people will NOT follow the same principles? Simple game theory analyses in a monoculture environment are easy to do, but have very little relationship to real life.
That looks to me like a continuous (and probably multidimensional) value. All moralities operate in terms of “should” and none finds the world as it is to be perfect. This means that all contemplate the gap between “is” and “should be”, and this gap can be seen as great or as not that significant.
Ask me when you acquire the capability :-)
That’s an interesting way to view it, but it seems accurate. Say God created the world; then contractualist ethics or the ethics of cooperation wouldn’t apply to him, but we’d get a sense of what his population-ethical stance must have been.
No one ever gets asked whether they want to be born. This is one of the issues where there is no such thing as “not taking a stance;” how we act in our lifetimes is going to affect what sort of minds there will or won’t be in the far future. We can discuss suggestions and try to come to a consensus of those currently in power, but future generations are indeed in a powerless position.
Sure, but so what? Your power to deliberately and purposefully affect these things is limited by your ability to understand and model the development of the world sufficiently well to know which levers to pull. I would like to suggest that for “far future” that power is indistinguishable from zero.
It would be kind-of surprising if the capabilities to create pleasure and suffering were very asymmetrical. Carl has written a little around this general topic—Are pain and pleasure equally energy-efficient?
We tend to do things we want, not things we don’t want. And entropy tends to increase, not decrease. I would be very surprised if these were uncorrelated; in other words, I would expect doing what we want to overall increase entropy more than doing what we don’t want.
(Obviously, doing what we want decreases entropy in a particular area; but it does this by increasing overall entropy more.)
I’ve not read the comments, so I may be repeating something (or saying something that someone has already refuted/critiqued).
I think it’s problematic to net all this out on an individual basis, much less at some aggregate level, even for a single species, much less across multiple species.
First, we’re adaptive creatures so the scale is always sliding over time and as we act.
Second, disutility feeds into actions that produce utility (using your terms, which might be wrong, as my meaning here is want/discomfort/non-satisfaction versus satisfaction/fulfillment type internal states). If on net a person is on the plus side of the scale you defined, what do they do? In this case I’m thinking of some of the sci-fi themes I’ve read/seen where some VR tool leaves the person happy, but then they just starve to death.
Finally, isn’t the best counter here the oft-stated quip “Life sucks, but it’s better than the alternative”? If one accepts that statement, then arguments that lead to the conclusion of choosing death (and especially the death of others) really need their underlying premises reviewed. At least one premise must be false in the general case. (I’ll concede that in certain special/individual cases death may be preferred to the conditions of living.)
I’ve had arguments before with negative-leaning Utilitarians and the best argument I’ve come up with goes like this...
Proper Utility Maximization needs to take into account not only the immediate, currently existing happiness and suffering of the present slice of time, but also the net utility of all sentient beings throughout all of spacetime. Assuming that the Eternal Block Universe Theory of Physics is true, then past and future sentient beings do in fact exist, and therefore matter equally.
Now the important thing to stress here is then that what matters is not the current Net Utility today but overall Net Utility throughout Eternity. Two basic assumptions can be made about the trends through spacetime. First, that compounding population growth means that most sentient beings exist in the future. Second, that melioristic progress means that the conscious experience is, all other things being equal, more positive in the future than in the past, because of the compounding effects of technology, and sentient beings deciding to build and create better systems, structures, and societies that outlive the individuals themselves.
Sentient agents are not passive, but actively seek positive conscious experiences and try to create circumstances that will perpetuate such things. Thus, as the power of sentient beings to influence the state of the universe increases, so should the ratio of positive to negative. Other things, such as the psychological negativity bias, remain stable throughout history, while the compounding factors trend upwards, usually at an exponential rate.
Thus, assuming these trends hold, we can expect that the vast majority of conscious experiences will be positive, and the overall universe will be net positive in terms of utility. Does that suck for us who live close to the beginning of civilization? Kinda yes. But from a Utilitarian perspective, it can be argued that our suffering is for the Greatest Good, because we are the seeds, the foundation from which so much will have its beginnings.
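To see what the compounding argument amounts to arithmetically, here is a toy model (all the numbers are made up purely for illustration; pop_growth, welfare_gain and the rest are placeholders, not estimates):

```python
# Toy sketch: compounding population growth plus slowly improving average
# welfare can make the spacetime total positive even if the present is net negative.
# All parameters are made-up placeholders, not forecasts.

def net_utility_over_time(generations=50, pop0=1.0, pop_growth=1.05,
                          welfare0=-0.2, welfare_gain=0.02):
    """Sum of (population x average welfare) over successive generations."""
    total = 0.0
    pop, welfare = pop0, welfare0
    for _ in range(generations):
        total += pop * welfare
        pop *= pop_growth          # compounding population growth
        welfare += welfare_gain    # melioristic drift toward positive welfare
    return total

if __name__ == "__main__":
    # Positive total despite the first ten generations being net negative.
    print(net_utility_over_time())
```

On these assumptions the early net-negative generations are swamped by the later, larger, and happier ones; the whole argument of course stands or falls with the assumption that the trends hold.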
Now, it can be countered that we do not know that the future really exists, and that humanity and its legacy might well be snuffed out sooner rather than later. In fact, the fact that we are born here and now can be seen as statistical evidence for this, because if on average you are most likely to be born at the height of human existence, then this period of time is likely to be around the maximum point before the decline.
However, we cannot be sure about this. Also, if the Many Worlds Interpretation of Quantum Mechanics is true, then even if in most worlds humanity ceases to exist around this time, there still exists a non-trivial percentage of worlds where humanity survives into the far distant future, establishing a legacy among the stars and creating relative utopia through the aforementioned compounding effects. For the sake of these possible worlds, and their extraordinarily high expected utility, I would recommend trying to keep life and humanity alive.
I will say the same thing here that I did there: if (and only if) you attempt to kill me, I’ll attempt to kill you back, with enough torture that you fear that outcome. Your morality should be selected for being the best for you, logically. I commit to making sure that anything that involves attacking me is very bad for you.
Um, obvious solution: redefine your morality. There is no objective morality. If you think the net utility of the world is negative, that really says more about you than the world.
And if you are totally sincere in this belief, then honestly: seek professional mental help.
“I’m worried that billions of beings that I care about are suffering constantly, and there’s no way for me to help them.”
“This wouldn’t be a problem for you if you didn’t care about them.”
Morality may not be objective, but that doesn’t mean I can just pick a new morality, or would want to if I could.
Dr. Clippy suggests focusing on making paperclips instead. The universe is going to provide a lot of iron in the long term.
I said it would be even easier to simply redefine “paperclip” to mean an atom of iron, regardless of where it is, and already declare my work done, but for some reason Dr. Clippy got really angry after hearing that.
For what it’s worth, I don’t think professional mental health care is any good most of the time, and it’s only worth it if you’re actually psychotic. For things that don’t totally destroy your agency and just mostly dampen it, I think just doing things on your own is better.
There’s a lot of evidence that various types of “talk therapy” do improve most psychological conditions, including things like depression, anxiety disorders, etc.
The odd thing in the research is that they all seem to do about the same, which clearly needs further explanation. But at the moment, the best evidence we have is that seeing a therapist is a good idea that will most likely benefit a person who needs help.
I think the majority of people (literal majority, not just people who are said to have mental health problems) could benefit from talking to a therapist a couple of times. Most likely the benefit would be exhausted quickly though, which is why I said “a couple of times.”
Thanks for your reply, username2. I am disheartened to see that “You’re crazy” is still being used in the guise of a counterargument.
Why do you think the net utility of the world is either negative or undefined?
Interesting username.
In all seriousness and with all good intent, I am quite serious when I say that thinking the world is without value is in fact a textbook symptom of depression.
But I think you have chosen to ignore the larger point of my comment, that morality is really self-determined. Saying that your personal morality leads to an assessment of net negative utility is saying that “my arbitrarily chosen utility function leads to not-useful outcomes.” Well… pick another.
I think you can confuse yourself by treating utility too literally as a scalar quantity. I won’t argue (here) against interpersonal comparisons, but I will point out that we have a lot of evidence that even people who report lots of pain and suffering and almost no pleasure do not usually commit suicide, nor, later in life, do they advise it to younger people.
This implies pretty strongly that most people’s self-evaluation of their life is NOT a sum of their individual moments.
The topic of hedonic adaptation and the fact that some kinds of memories fade faster than others is another difficulty in the evaluation of retrospective value of living. Individual self-evaluations of a point-in-time change over time—how much it hurts now is simply different from how much I remember it hurting tomorrow. Which value do you use in your sum?
Recently I can’t get past the notion that Pain/Suffering and Joy/Pleasure shouldn’t be considered to be two poles of the same scale. It just doesn’t feel psychologically realistic. It certainly doesn’t describe my inner life. Pain/Suffering feels like it exists on its own axis, and can go from essentially zero to pretty intense states, and Joy/Pleasure/Whatever is simply a different axis.
I might not go so far as to say that these axes are completely orthogonal simply because it’s pretty hard to feel transcendent joy when you’re feeling profound suffering at the same time, but this doesn’t actually seem like it has to be a fundamental property of all minds. I can feel some pretty good temporary states of joy even when I’m having a really rough time in my life, and I can feel intense waves of suffering even when I’m deeply happy overall.
If you choose to put these two different phenomena on the same scale, and treat them as opposites, then that just leads you to really unpleasant conclusions by construction. You have assumed the conclusion by your decision to treat pain as anti-joy.
I think most people find utopias containing absolutely no suffering to be kind of off-putting. Is the kind of suffering you endure while training for a sporting competition something that you would want permanently erased from the universe? I think the types of suffering most people are actually against are the types that come along with destroyed value, and if that’s the case, then just say you’re against destroying value, don’t say you’re against suffering.
For evolved beings, I’d expect maximum pain to exceed maximum pleasure in magnitude. For example, the wolf may get pleasure from eating the deer, but the deer may get pain from being killed. But the evolutionary selection pressure is much stronger on the deer in this case. If the wolf loses the fight, it only loses its lunch and can try again later. But if the deer loses the fight, it loses its life. Psychological studies on humans bear this out. We tend to weight negative events four or five times more strongly than positive ones.
You’re mistaken if you assume this imbalance must apply to the space of all possible minds. We know it’s true of unmodified humans. There’s really no reason it must apply to uploads, is there? Your original counterargument was valid.
The truth is, we really don’t know which creatures are conscious. On the one hand, I’m quite confident that animals that can pass the mirror test are conscious, self aware, and capable of suffering. Most animals don’t pass this test.
On the other hand, consider the fact that you can have a “pain” response without actually being conscious of it. It happens all the time. If you touch the hot stove, you reflexively withdraw your hand before the nerve impulse has time to even reach your brain. The spinal cord does the processing. I’m not ready to call my spinal cord conscious, are you? (If you want to go down that route, how do you know rocks aren’t conscious?) The nervous systems of many species are simpler than that. I don’t believe jellyfish are conscious. Just because an animal reacts to a “pain” signal, doesn’t mean it actually hurts. This is true even of humans.
There are many cases in between these extremes. I don’t know which animals are conscious in these cases. But that doesn’t mean they can suffer like humans do. Humans have a lot of willpower. They can override their basic instincts to an astonishing degree using their frontal lobes. Therefore, instincts and emotions may have to have evolved to be much stronger in humans than in other conscious animals to compensate. Where a human must experience an overwhelming urge, an animal may only need a mild preference to act. Animals that do suffer may suffer much less than one might think.
Net utility according to what function? Presumably pleasure minus pain, right? As people have pointed out, this is not at all the utility function animals (including humans) actually use to make choices. It seems relevant to you presumably because the idea has some aesthetic appeal to you, not because God wrote it on a stone tablet or anything.
I think once people recognize that questions that seem to be about “objective morality” are really usually questions about their own moral preferences, they tend to abandon system-building in favor of self-knowledge.
I find the idea that net utility among people in our world today is negative to be an extremely implausible one. Even people who have brief periods of time where their utility is so far negative that they attempt suicide (but survive) generally regret it later, as that kind of deep depression is usually something that passes pretty quickly, and before long they find themselves back in a positive-utility state regretting that they almost ended their life. Most people overall enjoy life, I think, and generally want to extend it for as long as possible.
Life can be hard, and it’s certainly nowhere close to optimal (yet?), but overall it’s definitely a positive-utility state. So much so that the possibility of losing life (fear, danger, illness, etc.) is itself one of the most utility-lowering things that can happen to a person.
As for the other point: when comparing two possible futures you also have to consider the probability of either one happening. The only way you could get a future with an AI torturing vast numbers of people forever is if someone specifically designed it to do that. That’s not impossible, but it’s far, far more likely that a person would want to create an AI that would make large numbers of people happy, either altruistically, or for economic or political reasons, or as a cooperative effort between people, etc. (And of course it’s also possible that we fail and the AI just kills everyone, or that super-AI ends up not happening or not being as important as we think, etc.) But just looking at those two possibilities, “AI-tortures-everyone” vs. “AI-makes-people-happy”, if the second one is far more likely than the first one, then you need to give it the greater utility weight.
Perhaps I was a bit misleading, but when I said the net utility of the Earth may be negative, I had in mind mostly fish and other animals that can feel pain. That was what Singer was talking about in the beginning essays. I am fairly certain net utility of humans is positive.
If you think that (1) net utility of humans is positive and (2) net utility of all animals is negative, and you are minded to try to deal with this by mass-killing, why would you then propose wiping out all animals including humans rather than wiping out all animals other than humans? Or even some more carefully targeted option like wiping out a subset of animals, chosen to improve that negative net utility as much as possible?
[EDITED to fix screwed-up formatting]
Wow, that had for some reason never crossed my mind. That’s probably a very bad sign.
Honestly, it probably is. :) Not a bad sign as in you are a bad person, but bad sign as in this is an attractor space of Bad Thought Experiments that rationalist-identifying people seem to keep falling into because they’re interesting.
I like your plan better, gjm. Mass biocide must wait until after we’re no longer so dependent on the biosphere and we can properly target interventions. This is probably a post-singularity question.
Ok, that’s possible. I still don’t think it’s that likely, though. In general, at least from my limited experience with animals, most of them are pretty “happy/content” most of the time (as much as those words can apply to most animals, so take it with a grain of salt), so long as they aren’t starving and aren’t in serious pain right at that moment in time. They do have other emotional responses, like anger or fear or pain, but those only happen in special conditions.
I think that’s how evolution designed most animals; they’re really only under “stress” a small percentage of the time, and an animal under “stress” 24/7 (like, say, an animal in an unhappy state of captivity) often develops health problems very quickly because that’s not a natural state for them.
This is probably more true of some animals than others. From what I’ve read, most baboons and hyenas (for example) are pretty miserable because of their social structures. I remember reading about a case where the dominant members of a baboon troop died of disease and their culture shifted because of it. The surviving baboons were much happier.
Nature (evolution) literally invented pain in the first place, and it’s under no obligation to turn it off when it doesn’t impact genetic fitness. Elephants pass the mirror test. That’s very strong evidence that they’re conscious and self-aware. Yet they slowly starve to death once they’ve run out of teeth.
Oh, there is a lot of suffering in nature, no question. The world, as it evolved, isn’t anywhere close to optimal, for anything.
I do think it’s highly unlikely that net utility for your average animal over the course of its lifetime is going to be negative, though. The “default state” of an animal when it is not under stress does not seem to be an unhappy one, in general.