We’re treading close to terminal values here. I will express some aesthetic preference for nature qua nature. However I also recognize a libertarian attitude that we should allow other individuals to live the lives they choose in the environments they find themselves to the extent reasonably possible, and I see no justification for anthropocentric limits on such a preference.
Absent strong reasons otherwise, “do no harm” and “careful, limited action” should be the default position. The best we can do for animals that don’t have several millennia of adaptation to human companionship (i.e. not dogs, cats, and horses) is to leave them alone and not destroy their natural habitat. Where we have destroyed it, attempt to restore it as best we can, or protect what remains. Focus on the species, not the individual. We have neither the knowledge nor the will to protect individual, non-pet animals.
When you ask, “Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn’t we?” it’s not clear to me whether you’re referring to why we shouldn’t move humans into virtual boxes or why we shouldn’t move animals into virtual boxes, or both. If you’re talking about humans, the answer is because we don’t get to make that choice for other humans. I for one have no desire to live my life in a Nozick box, and will oppose anyone who tries to put me in one while I’m still capable of living a normal life. If you’re referring to animals, the argument is similar though more indirect. Ultimately humans should not take it upon themselves to decide how another species lives. The burden of proof rests on those who wish to tamper with nature, not those who wish to leave it alone.
We’re treading close to terminal values here. I will express some aesthetic preference for nature qua nature.
That strikes me as inconsistent, assuming that preventing suffering/minimizing disutility is also a terminal value. In those terms, nature is bad. Really, really bad.
I also recognize a libertarian attitude that we should allow other individuals to live the lives they choose in the environments they find themselves to the extent reasonably possible.
It seems arbitrary to exclude the environment from the cluster of factors that go into living “the lives they choose.” I choose to not live in a hostile environment where things much larger than me are trying to flay me alive, and I don’t think it’s too much of a stretch to assume that most other conscious beings would choose the same if they knew they had the option.
Absent strong reasons otherwise, “do no harm” and “careful, limited action” should be the default position. The best we can do for animals that don’t have several millennia of adaptation to human companionship (i.e. not dogs, cats, and horses) is to leave them alone and not destroy their natural habitat.
Taken with this...
We need to value the species as a whole, not just the individual members; and we need to value their inherent nature as predators and prey.
...it seems like you don’t really have a problem with animal suffering, as long as human beings aren’t the ones causing it. But the gazelle doesn’t really care whether she’s being chased down by a bowhunter or a lion, although she might arguably prefer that the human kill her if she knew what was in store for her from the lion.
I still don’t know why you think we ought to value predators’ “inherent nature” as predators or treat entire species as more important than their constituent individuals. My follow-up questions would be:
(1) If there were a species of animal who fed on the chemicals produced from intense, prolonged suffering and fear, would we be right to value its “inherent nature” as a torturer? Would it not be justifiable to either destroy it or alter it sufficiently that it didn’t need to torture other creatures to eat?
(2) What is the value in keeping any given species in existence, assuming that its disappearance would have an immense positive effect on the other conscious beings in its environment? Why is having n species necessarily better than having n-1? Presumably, you wouldn’t want to add the torture-predators in the question above to our ecosystem—but if they were already here, would you want them to continue existing? Are worlds in which they exist somehow better than ours?
We have neither the knowledge nor the will to protect individual, non-pet animals.
We certainly know enough to be able to cure their most common ailments, ease their physical pain, and prevent them from dying from the sort of injuries and illnesses that would finish them off in their natural environments. Our knowledge isn’t perfect, but it’s a stretch to say we don’t have “the knowledge to protect” them. I suspect that our will to do so is constrained by the scope of the problem. “Fixing nature” is too big a task to wrap our heads around—for now. That might not always be the case.
When you ask, “Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn’t we?” it’s not clear to me whether you’re referring to why we shouldn’t move humans into virtual boxes or why we shouldn’t move animals into virtual boxes, or both.
Both.
If you’re talking about humans, the answer is because we don’t get to make that choice for other humans. I for one have no desire to live my life in a Nozick box, and will oppose anyone who tries to put me in one while I’m still capable of living a normal life.
Then that environment wouldn’t be better on the measures that matter to you, although I suspect that there is some plausible virtual box sufficiently better on the other measures that you would prefer it to the box you live in now. I have a hard time understanding what is so unappealing about a virtual world versus the “real one.”
If you’re referring to animals, the argument is similar though more indirect. Ultimately humans should not take it upon themselves to decide how another species lives.
This suggests to me that you haven’t really internalized exactly how bad it is to be chased down by something that wants to pin you down and eat parts of you away until you finally die.
The burden of proof rests on those who wish to tamper with nature, not those who wish to leave it alone.
An example of the importance of predators I happened across recently:
Mounting evidence indicates that there are cascading ecological effects when top-level predators decline. A recent investigation looked at four reef systems in the Pacific Islands, ranging from hosting a robust shark population to having few, if any, because of overfishing. Where sharks were abundant, other fish and coral thrived. When they were absent, algae choked the reef nearly to death and biodiversity plummeted.
Overfishing sharks, such as the bull, great white, and hammerhead, along the Atlantic Coast has led to an explosion of the rays, skates, and small sharks they eat, another study found. Some of these creatures, in turn, are devouring shellfish and possibly tearing up seagrass while they forage, destroying feeding grounds for birds and nurseries for fish.
“To have healthy populations of healthy seabirds and shorebirds, we need a healthy marine environment,” says Mike Sutton, Audubon California executive director and a Shark-Friendly Marina Initiative board member. “We’re not going to have that without sharks.”
“Safer Waters”, Alisa Opar, Audubon, July-August 2013, p. 52
This is just one example of the importance of top-level predators for everything in the ecosystem. Nature is complex and interconnected. If you eliminate some species because you think they’re mean, you’re going to damage a lot more.
This is an excellent example of how it’s a bad idea to mess with ecosystems without really knowing what you’re doing. Ideally, any intervention should be tested on some trustworthy (i.e., more-or-less complete and experimentally verified) ecological simulations to make sure it won’t have any catastrophic effects down the chain.
But of course it would be a mistake to conclude from this that keeping things as they are is inherently good.
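The simulation gate suggested above can be sketched with a toy model. What follows is a minimal, hedged illustration, not a real ecological model: it uses classic Lotka-Volterra predator-prey dynamics with made-up parameters, and simply compares a baseline run against a “remove the predators” intervention before anyone acts.

```python
# Toy sketch of "test the intervention in simulation first."
# Assumptions: Lotka-Volterra dynamics, invented parameters, Euler integration.

def simulate(prey, predators, steps=2000, dt=0.01,
             growth=1.0, predation=0.5, efficiency=0.25, death=0.6):
    """Euler-integrate a Lotka-Volterra predator-prey model."""
    history = []
    for _ in range(steps):
        d_prey = (growth * prey - predation * prey * predators) * dt
        d_pred = (efficiency * predation * prey * predators - death * predators) * dt
        prey = max(prey + d_prey, 0.0)
        predators = max(predators + d_pred, 0.0)
        history.append((prey, predators))
    return history

# "Intervention": remove the predators entirely and compare outcomes.
baseline = simulate(prey=2.0, predators=1.0)
no_predators = simulate(prey=2.0, predators=0.0)

# Without predators, prey numbers grow without bound in this toy model --
# a crude analogue of the ray explosion that followed shark overfishing.
print("baseline final state:", baseline[-1])
print("no-predator final state:", no_predators[-1])
```

Even a model this crude reproduces the qualitative lesson of the shark example: delete the top predator and the prey population runs away, which is exactly the kind of downstream effect a pre-intervention simulation check is meant to catch. A trustworthy check would of course need a vastly richer, empirically validated model.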
If you eliminate some species because you think they’re mean, you’re going to damage a lot more.
I’d just like to point out that (a) “mean” is a very poor descriptor of predation (neither its severity nor its connotations re: motivation do justice to reality), and (b) this use of “damage” relies on the use of “healthy” to describe a population of beings routinely devoured alive well before the end of their natural lifespans. If we “damaged” a previously “healthy” system wherein the same sorts of things were happening to humans, we would almost certainly consider it a good thing.
(b) this use of “damage” relies on the use of “healthy” to describe a population of beings routinely devoured alive well before the end of their natural lifespans.
If “natural lifespans” means what they would have if they weren’t eaten, it’s a tautology. If not, what does it mean? The shark’s “natural” lifespan requires that it eats other creatures. Their “natural” lifespan requires that it does not.
Yes, I’m using “natural lifespan” here as a placeholder for “the typical lifespan assuming nothing is actively trying to kill you.” It’s not great language, but I don’t think it’s obviously tautological.
The shark’s “natural” lifespan requires that it eats other creatures. Their “natural” lifespan requires that it does not.
Yes. My question is whether that’s a system that works for us.
We can say, “Evil sharks!” but I don’t feel any need either to exterminate all predators from the world or to modify them to graze on kelp. Yes, there’s a monumental amount of animal suffering in the ordinary course of things, even apart from humans. Maybe there wouldn’t be in a system designed by far-future humans from scratch. But radically changing the one we live in when we hardly know how it all works—witness the quoted results of overfishing sharks—strikes me as quixotic folly.
It strikes me as folly, too. But “Let’s go kill the sharks, then!” does not necessarily follow from “Predation is not anywhere close to optimal.” Nowhere have I (or anyone else here, unless I’m mistaken) argued that we should play with massive ecosystems now.
I’m very curious why you don’t feel any need to exterminate or modify predators, assuming it’s likely to be something we can do in the future with some degree of caution and precision.
I’m very curious why you don’t feel any need to exterminate or modify predators, assuming it’s likely to be something we can do in the future with some degree of caution and precision.
That sort of intervention is too far in the future for me to consider it worth thinking about. People of the future can take care of it then. That applies even if I’m one of those people of the far future (not that I expect to be). Future-me can deal with it, present-me doesn’t care or need to care what future-me decides.
In contrast, smallpox, tuberculosis, cholera, and the like are worth exterminating now, because (a) unlike the beautiful big fierce animals, they’re no loss in themselves, (b) it doesn’t appear that their loss will disrupt any ecosystems we want to keep, and (c) we actually can do it here and now.
There’s something about this sort of philosophy that I’ve wondered about for a while.
Do you think that deriving utility from the suffering of others (or, less directly, from activities that necessarily involve the suffering of others) is a valid value? Or is it intrinsically invalid?
That is, if we were in a position to reshape all of reality according to our whim, and decided to satisfy the values of all morally relevant beings, would we also want to satisfy the values of beings that derive pleasure/utility from the suffering of others, assuming we could do so without actually inflicting disutility/pain on any other beings?
And more concretely: in a “we are now omnipotent gods” scenario where we could, if we wanted to, create for sharks an environment where they could eat fish to their hearts’ content (and these would of course be artificial fish without any actual capacity for suffering, unbeknownst to the sharks) — would we do so?
Or would we judge the sharks’ pleasure from eating fish to be an invalid value, and simply modify them to not be predators?
The shark question is perhaps a bit esoteric; but if we substitute “psychopaths” or “serial killers” for “sharks”, it might well become relevant at some future date.
I’m not sure what you mean by “valid” here—could you clarify? I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn’t inferior to a world where beings are deriving the same amount of utility from some other activity that doesn’t affect other beings, all else held equal. However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don’t do anything that actually causes suffering.
I’m not sure what you mean by “valid” here—could you clarify?
Sure. By “valid” I mean something like “worth preserving”, or “to be endorsed as a part of the complex set of values that make up human-values-in-general”.
In other words, in the scenario where we’re effectively omnipotent (for this purpose, at least), and have decided that we’re going to go ahead and satisfy the values of all morally relevant beings — are we going to exclude some values? Or exclude some beings on the basis of their values? For example: should we, in such a scenario, say: “we’ll satisfy the values of all the humans, except the psychopaths/sharks/whoever; we don’t find their values to be worth satisfying, so they’re going to be excluded from this”?
I would guess, for instance, that few people here would say: yeah, along with satisfying the values of all humans, let’s also satisfy the values of all the paperclip maximizers. We don’t find paperclip maximization to be a valid value, in that sense.
So my question to you is where you stand on all of that. Are there invalid values? Would you, in fact, try to satisfy Clippy’s values as well as those of humans? If not, how about sharks? Psychopaths? Etc.?
I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn’t inferior to a world where beings are deriving the same amount of utility from some other activity that doesn’t affect other beings, all else held equal.
Ok. Actually, I could take that as an answer to at least some of my above questions, but if you want to expand a bit on what I ask in this post, that would be cool.
However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don’t do anything that actually causes suffering.
Well, sure. But let’s keep this in the least convenient possible world, where such non-fundamental issues are somehow dealt with.
There’s a lot here, and I will try to address some specific points later. For now, I will say that personally I do not espouse utilitarianism for several reasons, so if you find me inconsistent with utilitarianism, no surprise there. Nor do I accept the complete elimination of all suffering and maximization of pleasure as a terminal value. I do not want to live, and don’t think most other people want to live, in a matrix world where we’re all drugged to the gills with maximal levels of dopamine and fed through tubes.
Eliminating torture, starvation, deprivation, deadly disease, and extreme poverty is good; but that’s not the same thing as saying we should never stub our toe, feel some hunger pangs before lunch, play a rough game of hockey, or take a risk climbing a mountain. The world of pure pleasure and no pain, struggle, or effort is a dystopia, not a utopia, at least in my view.
I suspect that giving any one single principle exclusive value is likely a path to a boring world tiled in paperclips. It is precisely the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living in. There is no single principle, not even maximizing pleasure and minimizing pain, that does not lead to dystopia when it is taken to its logical extreme and all other competing principles are thrown out. We are complicated and contradictory beings, and we need to embrace that complexity; not attempt to smooth it out.
Elharo, which is more interesting? Wireheading—or “the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living in”? Yes, I agree, the latter certainly sounds more exciting; but “from the inside”, quite the reverse. Wireheading is always enthralling, whereas everyday life is often humdrum. Likewise with so-called utilitronium. To humans, utilitronium sounds unimaginably dull and monotonous, but “from the inside” it presumably feels sublime.
However, we don’t need to choose between aiming for a utilitronium shockwave and conserving the status quo. The point of recalibrating our hedonic treadmill is that life can be fabulously richer for everyone, in principle orders of magnitude richer, without being any less diverse and without forcing us to give up our existing values and preference architectures. (cf. “The catechol-O-methyl transferase Val158Met polymorphism and experience of reward in the flow of daily life.”: http://www.ncbi.nlm.nih.gov/pubmed/17687265) In principle, there is nothing to stop benign (super)intelligence from spreading such reward pathway enhancements across the phylogenetic tree.
The burden of proof rests on those who wish to tamper with nature, not those who wish to leave it alone.
To prove what? Two values being in conflict isn’t necessarily inconsistent; it just means that you have to make trade-offs.