Several people have been attempting to reductio my pro-human point of view, so I’ll do the same back to the pro-animal people here: how simple is the simplest animal you’re willing to assign moral worth to? Are you taking into account meta-uncertainty about the moral worth of even very simple animals? (What about living organisms outside of the animal kingdom, like bacteria? Viruses?) If you don’t care about organisms simple enough that they don’t suffer, does it seem “arbitrary” to you to single out a particular mental behavior as being the mental behavior that signifies moral worth? Does it seem “mindist” to you to single out having a particular kind of mind as being the thing that signifies moral worth?
If you calculated that assigning even very small moral worth to a simple but sufficiently numerous organism leads to the conclusion that the moral worth of non-human organisms on Earth strongly outweighs, in aggregate, the moral worth of humans, would you act on it (e.g. by making the world a substantially better place for some bacterium by infecting many other animals, such as humans, with it)?
If you were the only human left on Earth and you couldn’t find enough non-meat to survive on, would you kill yourself to avoid having to hunt to survive?
How do you resolve conflicts among organisms (e.g. predatorial or parasitic relationships)?
how simple is the simplest animal you’re willing to assign moral worth to?
I don’t value animals per se; it is their suffering I care about and want to prevent. If it turns out that even the tiniest animals can suffer, I will take this into consideration. I’m already taking insects and nematodes into consideration probabilistically; I think it is highly unlikely that they are sentient, and I think that even if they are sentient, their suffering might not be as intense as that of mammals, but since their numbers are so huge, the well-being of all those small creatures makes up a non-negligible term in my utility function.
If you don’t care about organisms simple enough that they don’t suffer, does it seem “arbitrary” to you to single out a particular mental behavior as being the mental behavior that signifies moral worth?
No, it seems completely non-arbitrary to me. Only sentient beings have a first-person point of view; only for them can states of the world be good or bad. A stone cannot be harmed in the same way a sentient being can be harmed. Introspectively, my suffering is bad because it is suffering; there is no other reason.
If you calculated that assigning even very small moral worth to a simple but sufficiently numerous organism leads to the conclusion that the moral worth of non-human organisms on Earth strongly outweighs, in aggregate, the moral worth of humans, would you act on it (e.g. by making the world a substantially better place for some bacterium by infecting many other animals, such as humans, with it)?
I don’t care about maximizing the amount of morally relevant entities, so this is an unlikely scenario. But I guess the point of your question is whether I am serious about the criteria I’m endorsing. Yes, I am. If my best estimates come out in a way leading to counterintuitive conclusions, and if that remains the case even if I adjust for overconfidence on my part before doing something irreversible, then I would indeed act accordingly.
If you were the only human left on Earth and you couldn’t find enough non-meat to survive on, would you kill yourself to avoid having to hunt to survive?
The lives of most wild animals involve a lot of suffering already, and at some point, they are likely going to die painfully anyway. It is unclear whether me killing them (assuming I’d even be skilled enough to get one of them) would be net bad. I don’t intrinsically object to beings dying/being killed. But again, if it turns out that some action (e.g. killing myself) is what best fulfills the values I’ve come up with under reflection, I will do that, or, if I’m not mentally capable of doing it, I’d take a pill that would make me capable.
How do you resolve conflicts among organisms (e.g. predatorial or parasitic relationships)?
I don’t know, but I assume that an AI would be able to find a great solution. Maybe through reengineering animals so they become incapable of experiencing suffering, while somehow keeping the function of pain intact. Or maybe by simply getting rid of Darwinian nature and replacing it, if that is deemed necessary, with something artificial and nice.
I’m already taking insects or nematodes into consideration probabilistically; I think it is highly unlikely that they are sentient, and I think that even if they are sentient, their suffering might not be as intense as that of mammals, but since their numbers are so huge, the well-being of all those small creatures makes up a non-negligible term in my utility function.
A priori, it seems that the moral weight of insects would either be dominated by their massive numbers or by their tiny capacities. It’s a narrow space where the two balance and you get a non-negligible but still-not-overwhelming weight for insects in a utility function. How did you decide that this was right?
Having said that, ways of increasing the well-being of these animals may be quite different from ways of increasing it for larger animals. In particular, because so many of them die within the first few days of life, their average quality of life seems like it would be terrible. So reducing their populations looks like the current best option.
There may be good instrumental reasons for focusing on less controversial animals and hoping that they promote the kind of antispeciesism that spills over to concern about insects and does work for improving similar situations in the future.
For what it’s worth, here are the results of a survey that Vallinder and I circulated recently. 85% of expert respondents, and 89% of LessWrong respondents, believe that there is at least a 1% chance that insects are sentient, and 77% of experts and 69% of LessWrongers believe there is at least a 20% chance that they are sentient.
Yes, my current estimate for that is less than 1%, but this is definitely something I should look into more closely. This has been on my to-do list for quite a while already.
Another thing to consider is that insects are a diverse bunch. I’m virtually certain that some of them aren’t conscious, see for instance this type of behavior. OTOH, cockroaches or bees seem to be much more likely to be sentient.
Can you summarize the properties you look for when making these kinds of estimates of whether an insect is conscious/sentient/etc.? Or do you make these judgments based on more implicit/instinctive inspection?
I mostly do it by thinking about what I would accept as evidence of pain in more complex animals and see if it is present in insects. Complex pain behavior and evolutionary and functional homology relating to pain are things to look for.
There is quite a bit of research on complex pain behavior in crabs by Robert Elwood. I’d link his site, but it doesn’t seem to be up right now. You should be able to find the articles, though. Crabs have 100,000 neurons, which is around what many insects have.
Here is a pdf of a paper finding that a bunch of common human mind-altering drugs affect crawfish and fruit flies.
It is quite implicit/instinctive. The problem is that without having solved the problem of consciousness, there is also uncertainty about what you’re even looking for. Nociception seems to be a necessary criterion, but it’s not sufficient. In addition, I suspect that consciousness’ adaptive role has to do with the weighting of different “possible” behaviors, so there has to be some learning behavior or variety in behavioral subroutines.
I actually give some credence to extreme views like Dennett’s (and also Eliezer’s if I’m informed correctly), which state that sentience implies self-awareness, but my confidence for that is not higher than 20%. I read a couple of papers on invertebrate sentience and I adjusted the expert estimates downwards somewhat because I have a strong intuition that many biologists are too eager to attribute sentience to whatever they are studying (also, it is a bit confusing because opinions are all over the place). Brian Tomasik lists some interesting quotes and material here.
And regarding the number of neurons thing, there I’m basically just going by intuition, which is unfortunate so I should think about this some more.
Ice9, perhaps consider uncontrollable panic. Some of the most intense forms of sentience that humans undergo seem to be associated with a breakdown of meta-cognitive capacity. So let’s hope that what it’s like to be an asphyxiating fish, for example, doesn’t remotely resemble what it feels like to be a waterboarded human. I worry that our intuitive dimmer-switch model of consciousness, i.e. more intelligent = more sentient, may turn out to be mistaken.
Good point, there is reason to expect that I’m just assigning numbers in a way that makes the result come out convenient. Last time I did a very rough estimate, the expected suffering of insects and nematodes (given my subjective probabilities) came out at around half the expected suffering of all decapods/amphibians-and-larger wild animals. And then wild animals outnumber farm animals by around 2-3 orders of magnitude in terms of expected suffering, and farm animals outnumber humans by a large margin too. So if I just cared about current suffering, or suffering on earth only, then “non-negligible” would indeed be an understatement for insect suffering.
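The rough estimate described above can be sketched as simple expected-value arithmetic. The numbers below are illustrative placeholders chosen only to reproduce the stated orderings (insects roughly half of larger wild animals; wild animals a few orders of magnitude above farm animals), not the commenter’s actual figures:

```python
# Toy expected-suffering calculation: population * P(sentient) * intensity.
# All numbers are made-up placeholders, NOT anyone's actual estimates.

groups = {
    # name: (relative population, P(sentient), relative suffering intensity)
    "humans":       (1e0, 1.0,   1.0),
    "farm_animals": (1e1, 0.9,   0.8),
    "wild_animals": (1e4, 0.8,   0.6),
    "insects":      (1e9, 0.005, 5e-4),
}

def expected_suffering(population, p_sentient, intensity):
    """Expected suffering term for one group of organisms."""
    return population * p_sentient * intensity

totals = {name: expected_suffering(*params) for name, params in groups.items()}

# Even with a tiny probability of sentience and low intensity, sheer
# numbers can make the insect term non-negligible relative to the rest.
for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:>12}: {total:.3g}")
```

The point the sketch makes is structural rather than numerical: whether the insect term dominates, balances, or vanishes depends entirely on how the huge population multiplier trades off against the small probability and intensity factors.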
However, what worries me most is not the suffering that is happening on earth. If space colonization goes wrong or even non-optimally, the current amount of suffering could be multiplied by orders of magnitude. And this might happen even if our values improve. Consider the case of farmed animals: humans have probably never cared as much about the welfare of animals as they do now, but at the same time, we have never caused as much direct suffering to animals as we do now. If you primarily care about reducing the absolute amount of suffering, then whatever lets the amount of sentience skyrocket is a priori very dangerous.
Only sentient beings have a first-person point of view, only for them can states of the world be good or bad.
Is the blue-minimizing robot suffering if it sees a lot of blue? Would you want to help alleviate that suffering by recoloring blue things so that they are no longer blue?
I don’t see the relevance of this question, but judging by the upvotes it received, it seems that I’m missing something.
I think suffering is suffering, no matter the substrate it is based on. Whether such a robot would be sentient is an empirical question (in my view anyway, it has recently come to my attention that some people disagree with this). Once we solve the problem of consciousness, it will turn out that such a robot is either conscious or that it isn’t. If it is conscious, I will try to reduce its suffering. If the only way to do that would involve doing “weird” things, I would do weird things.
The relevance is that my moral intuitions suggest that the blue-minimizing robot is morally irrelevant. But if you’re willing to bite the bullet here, then at least you’re being consistent (although I’m no longer sure that consistency is such a great property of a moral system for humans).
1) I am okay with humanely raised farm meat (I found a local butcher shop that sources from farms I consider ethical)
2) If I didn’t have access to civilization, I would probably end up hunting to survive, although I’d try to do so as rarely and humanely as was possible given my circumstances. (I’m only like 5% altruist, I just try to direct that altruism as effectively as possible and if push comes to shove I’m a primal animal that needs to eat. I’m skeptical of people who claim otherwise)
3) I’m currently okay with eating insects, mussels, and similar simplish animals, whose lack of sentience I can make pretty good guesses about. (If insects do turn out to have sentience, that’s a pretty inconvenient world to have to live in, morally.)
4) I’m approximately average-preference-utilitarian. I value there being more creatures with more complex and interesting capacities for preference satisfaction (this is arbitrary and I’m fine with that). If I had to choose between humans and animals, I’d choose humans. But that’s not the choice offered to humans RE vegetarianism—what’s at stake is not humanity and complex relationships/art/intellectual-endeavors—it’s pretty straightforward pleasure (of a sort that I expect large swaths of the animal kingdom to be capable of experiencing—visceral enjoyment of food almost certainly evolved fairly early; you are not exercising any special human-ness to experience it)
Most people don’t need meat (or much of it) to be productive (the amount most people think they need is pretty grossly wrong), and the amount of hedonic satisfaction you’re getting from eating meat is vastly dwarfed by the anti-hedons that enabled it.
5) Ultimately, what I actually advocate is making the best decisions you can, given your circumstances. This includes trading off the willpower and energy you spend on vegetarianism vs. other ways you might be reducing suffering or increasing pleasure/joy/complex-beauty. I wouldn’t push too hard for an effective altruist to be vegetarian. If you argue that your “give a shit” energy is better spent on fighting poverty or injustice or preventing the destruction of the world by unfriendly AI, I won’t argue with you.
But I’d like people to at least have animal suffering on the radar of “things I’d like to give a shit about, if I had the energy, and that if it became much more convenient to care about, I’d make small modifications to my lifestyle.” So that when in-vitro meat becomes cheap and tasty, I think people should make the initial effort to switch over. (Possibly even while it’s still a bit more expensive). Meanwhile, humanely-raised meat tends to be tastier (it’s overall higher quality) so if you have leftover budget for nicer food in the first place, I’d consider that.
I don’t know how to resolve things like “the ecosystem is full of terribleness”. It is possible that plans that include “destroy all natural ecosystems” will turn out to be correct, but my prior on any given person correctly deciding to do that and executing on it without making lots of things worse is low.
But I’d like people to at least have animal suffering on the radar of “things I’d like to give a shit about, if I had the energy, and that if it became much more convenient to care about, I’d make small modifications to my lifestyle.” So that when in-vitro meat becomes cheap and tasty, I think people should make the initial effort to switch over. (Possibly even while it’s still a bit more expensive).
This is pretty much the case for me. I was vegetarian for a while in high school–oddly enough, less for reducing-suffering ethical reasons than for “it costs fewer resources to produce enough plants to feed the world population than to produce enough meat, as animals have to be fed plants and are a low-efficiency conversion of plant calories, so in order to better use the planet’s resources, everyone should eat more plants and less meat.” I consistently ended up with low iron and B12. It’s possible to get enough iron, B12, and protein as a vegetarian, but you do have to plan your meals a bit more carefully (i.e. always have beans with rice so you get complete protein) and possibly eat foods that you don’t like as much. Right now I cook about one dish with meat in it per week, and I haven’t had any iron or B12 deficiency problems since graduating high school 4 years ago.
In general, I optimize food for low cost as well as health value and ethics, but if in-vitro meat became available, I think this is valuable enough in the long run that I would be willing to “subsidize” its production and commercialization by paying higher prices.
I was vegetarian for a while in high school–oddly enough, less for reducing-suffering ethical reasons than for “it costs fewer resources to produce enough plants to feed the world population than to produce enough meat, as animals have to be fed plants and are a low-efficiency conversion of plant calories, so in order to better use the planet’s resources, everyone should eat more plants and less meat.”
Oddly, this sentence is more or less exactly true for me as well. Only on LessWrong...
Well, considering the existence of healthy vegetarians, it seems clear that we evolved to be at least capable of surviving in a low-meat environment.
I don’t have any sources or anything, and I’m pretty lazy, but I’ve been vegetarian since childhood, and never had any health problems as a result AFAICT.
I am entirely willing to take your word on this, but you know what they say about “anecdote” and declensions thereof. In this case specifically, one of the few things that seem to be reliably true about nutrition is that “people are different, and what works for some may fail or be outright disastrous for others”.
In any case, Raemon seemed to be making a weaker claim than “vegetarianism has no serious health downsides”. “Healthy portions of meat amount to far less than the 32 oz steak a day implied by some anti-vegetarian doomsayers” is something I’m completely willing to grant.
Considering the existence of healthy vegetarians, it seems clear that we evolved to be at least capable of surviving in a low-meat environment supported by modern agriculture that produces large quantities of concentrated non-meat protein in the form of tofu, eggs, whey protein, beans, and the like. This may be a happy accident. Are there any vegetarian hunter-gatherer societies?
I’ve been having a hell of a time finding trustworthy cites on this, possibly because there are so many groups with identity stakes in the matter—obesity researchers and advocates, vegetarians, and paleo diet adherents all have somewhat conflicting interests in ancestral nutrition. That said, this survey paper describes relatively modern hunter-gatherer diets ranging from 1% vegetable (the Nunamiut of Alaska) to 74% vegetable (the Gwi of Africa), with a mean somewhere around one third; no entirely vegetarian hunter-gatherers are described. This one describes societies subsisting on up to 90% gathered food (I don’t know whether or not this is synonymous with “vegetable”), but once again no exclusively vegetarian cultures and a mean around 30%.
I should mention by way of disclaimer that modern forager cultures tend to live in marginal environments and these numbers might not reflect the true ancestral proportions. And, of course, that this has no bearing either way on the ethical dimensions of the subject.
I’m having trouble finding… any kind of dietary information that isn’t obviously politicized (in any direction) right now.
But basically, when people think of a “serving” of meat, they imagine a large hunk of steak, when in fact a serving is more like the size of a deck of cards. A healthy diet has enough things going on in it besides meat that removing meat shouldn’t feel like it’s gutting out your entire source of pleasure from food.
Ah. Yeah, I don’t eat meat in huge chunks or anything. But meat sure is delicious, and comes in a bunch of different formats. Obviously removing meat would not totally turn my diet into a bleak, gray desert of bland gruel; I don’t think anyone would claim that. But it would make it meaningfully less enjoyable, on the whole.
This all seems pretty reasonable (except that I don’t think the validity of a human preference has much to do with how difficult it is for non-humans to have the same preference).
It doesn’t seem like you’re really criticizing “pro-animal people”—you’re just critiquing utilitarianism. (e.g. “Is it arbitrary to state that suffering is bad?” “What if you could help others only at great expense to yourself?”)
Supposing one does accept utilitarian principles, is there any reason why we shouldn’t care about the suffering of non-humans?
This is half a criticism and half a reflection of arguments that have been used against my position that I think are problematic. To the extent that you think these arguments are problematic, I probably agree.
is there any reason why we shouldn’t care about the suffering of non-humans?
Resources spent on alleviating the suffering of non-humans are resources that aren’t spent on alleviating the suffering of humans, which I value a lot more.
That’s a false dichotomy. Resources that stop being spent on alleviating the suffering of non-humans do not automatically translate into resources that are spent on alleviating the suffering of humans. Nor is it the case that there are insufficient resources in the world today to eliminate most human suffering. The issue there is purely one of distribution of wealth, not gross wealth.
Yes, but they’re less available. Maybe I triggered the wrong intuition with the word “resources.” I had in mind resources like the time and energy of intelligent people, not resources like money. I think it’s plausible to guess that time and energy spent on one altruistic cause really does funge directly against time and energy spent on others, e.g. because of good-deed-for-the-day effects.
There is nothing inconsistent about valuing the pain of some animals, but not of others. That said, I find the view hard to believe. When I reflect on why I think pain is bad, it seems clear that my belief is grounded in the phenomenology of pain itself, rather than in any biological or cognitive property of the organism undergoing the painful experience.
Pain is bad because it feels bad. That’s why I think pain should be alleviated irrespective of the species in which it occurs.
Truthfully, I’m not even sure I believe pain is bad in the relevant sense. It’s certainly something I’d prefer to avoid under most circumstances, but when I think about it in detail there always ends up being a “because” in there: because it monopolizes attention, because in sufficient quantity it can thoroughly screw up your motivational and emotional machinery, because it’s often attached to particular actions in a way that limits my ability to do things. It doesn’t feel like a root-level aversion to my reasoning self: when I’ve torn a ligament and can’t flex my foot in a certain way without intense stabbing agony, I’m much more annoyed by the things it prevents me from doing than by the pain it gives me, and indeed I remember the former much better than the latter.
I haven’t thought this through rigorously, but if I had to take a stab at it right now I’d say that pain is bad in roughly the same way that pleasure is good: in other words, it works reasonably well as a rough experiential pointer to the things I actually want to avoid, and it does place certain constraints on the kind of life I’d want to live, but I’d expect trying to ground an entire moral system in it to give me some pretty insane results once I started looking at corner cases.
Probably that fish don’t seem to be hugely different from amphibians/reptiles, birds, and mammals in terms of the six substitute-indicators-for-feeling-pain, and so it’s hard to say whether their pain experience is different.
I would agree that fish pain is less relevant than human pain (they have a central nervous system, yes, but less of one, and a huge part of what makes human pain bad is the psychological suffering associated with it).
My claim was that I don’t care about fish pain, not that fish pain is too different from human pain to matter. Rather, fish are too different from humans to matter.
How is the statement “fish and humans feel pain approximately equally” different from the statement “we should care about fish and human pain approximately equally?”
Most people probably wouldn’t consider that moral as such (though they’d likely be okay with it on pragmatic grounds), but the more general idea of treating some people’s pain as more significant than others’ is certainly consistent with a lot of moral systems. Common privileged categories: friends, relatives, children, the weak or helpless, people not considered evil.
It’s perfectly moral for me to be selfish to some degree, yes. I cannot care about others if I don’t care about myself. You might work differently, but utter unselfishness seems like an anomaly.
“I care about X’s pain” is mostly a statement about X, not a statement about pain. I don’t care about fish and I care about humans. You may not share this moral preference, but are you claiming that you don’t even understand it?
No, I have a lot of biases like this: the halo effect makes me think that humans’ ability to do math makes our suffering more important, “what you see is all there is” allows me to believe that slaughterhouses which operate far away must be morally acceptable, and so forth.
Anyway, fish suffering isn’t a make-or-break decision. People very frequently have the opportunity to choose a bean burrito over a chicken one (or even a beef burrito over a chicken one), and from what Peter has presented here it seems like this is an extremely effective way to reduce suffering.
I may be misunderstanding you, but I thought you were suggesting that there is a non-arbitrary set of physiological features that vertebrates share but fish don’t. I was pointing out that this doesn’t seem to be the case.
how simple is the simplest animal you’re willing to assign moral worth to?
Can’t speak for all vegetarians/pro-animal-rights types, but I personally discount based on complexity (or intelligence or whatever).
That’s not the same as discounting simpler creatures altogether—at least not when we’re discussing, say, pigs.
(At what point do you draw the line to start valuing creatures, by the way? Chimpanzees? Children? Superintelligent gods? Just curious, this isn’t a reductio.)
I’m not sure what the discount rate is, which is largely why I asked if you were sure about where the line was. I mostly go off intuition for determining how much various species are worth, so if you throw scope insensitivity into the mix...
but I personally discount based on complexity (or intelligence or whatever).
Would you apply said discount rate intraspecies in addition to interspecies?
By the way. One question I always wanted to ask a pro-animal-rights type: would you support a program for the extinction/reductions of the population of predatory animals on the grounds that they cause large amounts of unnecessary suffering to their prey?
By the way. One question I always wanted to ask a pro-animal-rights type: would you support a program for the extinction/reductions of the population of predatory animals on the grounds that they cause large amounts of unnecessary suffering to their prey?
Yes. Assuming that prey populations are kept from skyrocketing (e.g. through the use of immunocontraception) since that too would result in large amounts of unnecessary suffering.
Eugine, in answer to your question: yes. If we are committed to the well-being of all sentience in our forward light-cone, then we can’t simultaneously conserve predators in their existing guise.
(cf. http://www.abolitionist.com/reprogramming/index.html)
Humans are not obligate carnivores; and the in vitro meat revolution may shortly make this debate redundant; but it’s questionable whether posthuman superintelligence committed to the well-being of all sentience could conserve humans in their existing guise either.
This is, sadly, not a hypothetical question. This is an issue wildlife managers face regularly. For example, do you control the population of Brown-headed Cowbirds in order to maintain or increase the population of Bell’s Vireo or Kirtland’s Warbler? The answer is not especially controversial. The only questions are which methods of predator control are most effective, and what unintended side effects might occur. However these are practical, instrumental questions, not moral ones.
Where this comes into play in the public is in the conflict between house cats and birds. In particular, the establishment of feral cat colonies causes conflicts between people who favor non-native, vicious but furry and cute predators and people who favor native, avian, non-pet species. Indeed, this is one of the problems I have with many animal rights groups such as the Humane Society. They’re not pro-animal rights, just pro-pet-species rights.
A true concern for animals needs to treat animals as animals, not as furry baby human substitutes. We need to value the species as a whole, not just the individual members; and we need to value their inherent nature as predators and prey. A Capuchin Monkey living in a zoo safe from the threat of Harpy Eagles leads a life as limited and restricted as a human living in Robert Nozick’s Experience Machine. While zoos have their place, we should not seek to move all wild creatures into safe, sterile environments with no predators, pain, or danger any more than we would move all humans into isolated, AI-created virtual environments with no true interaction with reality.
Elharo, I take your point, but surely we do want humans to enjoy healthy lives free from hunger and disease and safe from parasites and predators? Utopian technology promises similar blessings to nonhuman sentients too. Human and nonhuman animals alike typically flourish best when free-living but not “wild”.
We need to value the species as a whole, not just the individual members; and we need to value their inherent nature as predators and prey.
Why?
While zoos have their place, we should not seek to move all wild creatures into safe, sterile environments with no predators, pain, or danger any more than we would move all humans into isolated, AI-created virtual environments with no true interaction with reality.
Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn’t we?
We’re treading close to terminal values here. I will express some aesthetic preference for nature qua nature. However I also recognize a libertarian attitude that we should allow other individuals to live the lives they choose in the environments they find themselves to the extent reasonably possible, and I see no justification for anthropocentric limits on such a preference.
Absent strong reasons otherwise, “do no harm” and “careful, limited action” should be the default position. The best we can do for animals that don’t have several millennia of adaptation to human companionship (i.e. not dogs, cats, and horses) is to leave them alone and not destroy their natural habitat. Where we have destroyed it, attempt to restore it as best we can, or protect what remains. Focus on the species, not the individual. We have neither the knowledge nor the will to protect individual, non-pet animals.
When you ask, “Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn’t we?” it’s not clear to me whether you’re referring to why we shouldn’t move humans into virtual boxes or why we shouldn’t move animals into virtual boxes, or both. If you’re talking about humans, the answer is because we don’t get to make that choice for other humans. I for one have no desire to live my life in a Nozick box, and will oppose anyone who tries to put me in one while I’m still capable of living a normal life. If you’re referring to animals, the argument is similar though more indirect. Ultimately humans should not take it upon themselves to decide how another species lives. The burden of proof rests on those who wish to tamper with nature, not those who wish to leave it alone.
We’re treading close to terminal values here. I will express some aesthetic preference for nature qua nature.
That strikes me as inconsistent, assuming that preventing suffering/minimizing disutility is also a terminal value. In those terms, nature is bad. Really, really bad.
I also recognize a libertarian attitude that we should allow other individuals to live the lives they choose in the environments they find themselves to the extent reasonably possible.
It seems arbitrary to exclude the environment from the cluster of factors that go into living “the lives they choose.” I choose to not live in a hostile environment where things much larger than me are trying to flay me alive, and I don’t think it’s too much of a stretch to assume that most other conscious beings would choose the same if they knew they had the option.
Absent strong reasons otherwise, “do no harm” and “careful, limited action” should be the default position. The best we can do for animals that don’t have several millennia of adaptation to human companionship (i.e. not dogs, cats, and horses) is to leave them alone and not destroy their natural habitat.
Taken with this...
We need to value the species as a whole, not just the individual members; and we need to value their inherent nature as predators and prey.
...it seems like you don’t really have a problem with animal suffering, as long as human beings aren’t the ones causing it. But the gazelle doesn’t really care whether she’s being chased down by a bowhunter or a lion, although she might arguably prefer that the human kill her if she knew what was in store for her from the lion.
I still don’t know why you think we ought to value predators’ “inherent nature” as predators or treat entire species as more important than their constituent individuals. My follow-up questions would be:
(1) If there were a species of animal who fed on the chemicals produced from intense, prolonged suffering and fear, would we be right to value its “inherent nature” as a torturer? Would it not be justifiable to either destroy it or alter it sufficiently that it didn’t need to torture other creatures to eat?
(2) What is the value in keeping any given species in existence, assuming that its disappearance would have an immense positive effect on the other conscious beings in its environment? Why is having n species necessarily better than having n-1? Presumably, you wouldn’t want to add the torture-predators in the question above to our ecosystem—but if they were already here, would you want them to continue existing? Are worlds in which they exist somehow better than ours?
We have neither the knowledge nor the will to protect individual, non-pet animals.
We certainly know enough to be able to cure their most common ailments, ease their physical pain, and prevent them from dying from the sort of injuries and illnesses that would finish them off in their natural environments. Our knowledge isn’t perfect, but it’s a stretch to say we don’t have “the knowledge to protect” them. I suspect that our will to do so is constrained by the scope of the problem. “Fixing nature” is too big a task to wrap our heads around—for now. That might not always be the case.
When you ask, “Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn’t we?” it’s not clear to me whether you’re referring to why we shouldn’t move humans into virtual boxes or why we shouldn’t move animals into virtual boxes, or both.
Both.
If you’re talking about humans, the answer is because we don’t get to make that choice for other humans. I for one have no desire to live my life in a Nozick box, and will oppose anyone who tries to put me in one while I’m still capable of living a normal life.
Then that environment wouldn’t be better on the measures that matter to you, although I suspect that there is some plausible virtual box sufficiently better on the other measures that you would prefer it to the box you live in now. I have a hard time understanding what is so unappealing about a virtual world versus the “real one.”
If you’re referring to animals, the argument is similar though more indirect. Ultimately humans should not take it upon themselves to decide how another species lives.
This suggests to me that you haven’t really internalized exactly how bad it is to be chased down by something that wants to pin you down and eat parts of you away until you finally die.
The burden of proof rests on those who wish to tamper with nature, not those who wish to leave it alone.
An example of the importance of predators I happened across recently:
Mounting evidence indicates that there are cascading ecological effects when top-level predators decline. A recent investigation looked at four reef systems in the Pacific Islands, ranging from hosting a robust shark population to having few, if any, because of overfishing. Where sharks were abundant, other fish and coral thrived. When they were absent, algae choked the reef nearly to death and biodiversity plummeted.
Overfishing sharks, such as the bull, great white, and hammerhead, along the Atlantic Coast has led to an explosion of the rays, skates, and small sharks they eat, another study found. Some of these creatures, in turn, are devouring shellfish and possibly tearing up seagrass while they forage, destroying feeding grounds for birds and nurseries for fish.
“To have healthy populations of healthy seabirds and shorebirds, we need a healthy marine environment,” says Mike Sutton, Audubon California executive director and a Shark-Friendly Marina Initiative board member. “We’re not going to have that without sharks.”
“Safer Waters”, Alisa Opar, Audubon, July-August 2013, p. 52
This is just one example of the importance of top-level predators for everything in the ecosystem. Nature is complex and interconnected. If you eliminate some species because you think they’re mean, you’re going to damage a lot more.
This is an excellent example of how it’s a bad idea to mess with ecosystems without really knowing what you’re doing. Ideally, any intervention should be tested on some trustworthy (ie. more-or-less complete, and experimentally verified) ecological simulations to make sure it won’t have any catastrophic effects down the chain.
But of course it would be a mistake to conclude from this that keeping things as they are is inherently good.
If you eliminate some species because you think they’re mean, you’re going to damage a lot more.
I’d just like to point out that (a) “mean” is a very poor descriptor of predation (neither its severity nor its connotations re: motivation do justice to reality), and (b) this use of “damage” relies on the use of “healthy” to describe a population of beings routinely devoured alive well before the end of their natural lifespans. If we “damaged” a previously “healthy” system wherein the same sorts of things were happening to humans, we would almost certainly consider it a good thing.
(b) this use of “damage” relies on the use of “healthy” to describe a population of beings routinely devoured alive well before the end of their natural lifespans.
If “natural lifespans” means what they would have if they weren’t eaten, it’s a tautology. If not, what does it mean? The shark’s “natural” lifespan requires that it eats other creatures. Their “natural” lifespan requires that it does not.
Yes, I’m using “natural lifespan” here as a placeholder for “the typical lifespan assuming nothing is actively trying to kill you.” It’s not great language, but I don’t think it’s obviously tautological.
The shark’s “natural” lifespan requires that it eats other creatures. Their “natural” lifespan requires that it does not.
Yes. My question is whether that’s a system that works for us.
We can say, “Evil sharks!” but I don’t feel any need to either exterminate all predators from the world, nor to modify them to graze on kelp. Yes, there’s a monumental amount of animal suffering in the ordinary course of things, even apart from humans. Maybe there wouldn’t be in a system designed by far future humans from scratch. But radically changing the one we live in when we hardly know how it all works—witness the quoted results of overfishing sharks—strikes me as quixotic folly.
It strikes me as folly, too. But “Let’s go kill the sharks, then!” does not necessarily follow from “Predation is not anywhere close to optimal.” Nowhere have I (or anyone else here, unless I’m mistaken) argued that we should play with massive ecosystems now.
I’m very curious why you don’t feel any need to exterminate or modify predators, assuming it’s likely to be something we can do in the future with some degree of caution and precision.
I’m very curious why you don’t feel any need to exterminate or modify predators, assuming it’s likely to be something we can do in the future with some degree of caution and precision.
That sort of intervention is too far in the future for me to consider it worth thinking about. People of the future can take care of it then. That applies even if I’m one of those people of the far future (not that I expect to be). Future-me can deal with it, present-me doesn’t care or need to care what future-me decides.
In contrast, smallpox, tuberculosis, cholera, and the like are worth exterminating now, because (a) unlike the beautiful big fierce animals, they’re no loss in themselves, (b) it doesn’t appear that their loss will disrupt any ecosystems we want to keep, and (c) we actually can do it here and now.
There’s something about this sort of philosophy that I’ve wondered about for a while.
Do you think that deriving utility from the suffering of others (or, less directly, from activities that necessarily involve the suffering of others) is a valid value? Or is it intrinsically invalid?
That is, if we were in a position to reshape all of reality according to our whim, and decided to satisfy the values of all morally relevant beings, would we also want to satisfy the values of beings that derive pleasure/utility from the suffering of others, assuming we could do so without actually inflicting disutility/pain on any other beings?
And more concretely: in a “we are now omnipotent gods” scenario where we could, if we wanted to, create for sharks an environment where they could eat fish to their hearts’ content (and these would of course be artificial fish without any actual capacity for suffering, unbeknownst to the sharks) — would we do so?
Or would we judge the sharks’ pleasure from eating fish to be an invalid value, and simply modify them to not be predators?
The shark question is perhaps a bit esoteric; but if we substitute “psychopaths” or “serial killers” for “sharks”, it might well become relevant at some future date.
I’m not sure what you mean by “valid” here—could you clarify? I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn’t inferior to a world where beings are deriving the same amount of utility from some other activity that doesn’t affect other beings, all else held equal. However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don’t do anything that actually causes suffering.
I’m not sure what you mean by “valid” here—could you clarify?
Sure. By “valid” I mean something like “worth preserving”, or “to be endorsed as a part of the complex set of values that make up human-values-in-general”.
In other words, in the scenario where we’re effectively omnipotent (for this purpose, at least), and have decided that we’re going to go ahead and satisfy the values of all morally relevant beings — are we going to exclude some values? Or exclude some beings on the basis of their values? For example: should we, in such a scenario, say: “we’ll satisfy the values of all the humans, except the psychopaths/sharks/whoever; we don’t find their values to be worth satisfying, so they’re going to be excluded from this”?
I would guess, for instance, that few people here would say: yeah, along with satisfying the values of all humans, let’s also satisfy the values of all the paperclip maximizers. We don’t find paperclip maximization to be a valid value, in that sense.
So my question to you is where you stand on all of that. Are there invalid values? Would you, in fact, try to satisfy Clippy’s values as well as those of humans? If not, how about sharks? Psychopaths? Etc.?
I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn’t inferior to a world where beings are deriving the same amount of utility from some other activity that doesn’t affect other beings, all else held equal.
Ok. Actually, I could take that as an answer to at least some of my above questions, but if you want to expand a bit on what I ask in this post, that would be cool.
However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don’t do anything that actually causes suffering.
Well, sure. But let’s keep this in the least convenient possible world, where such non-fundamental issues are somehow dealt with.
There’s a lot here, and I will try to address some specific points later. For now, I will say that personally I do not espouse utilitarianism for several reasons, so if you find me inconsistent with utilitarianism, no surprise there. Nor do I accept the complete elimination of all suffering and maximization of pleasure as a terminal value. I do not want to live, and don’t think most other people want to live, in a matrix world where we’re all drugged to our gills with maximal levels of dopamine and fed through tubes.
Eliminating torture, starvation, deprivation, deadly disease, and extreme poverty is good; but that’s not the same thing as saying we should never stub our toe, feel some hunger pangs before lunch, play a rough game of hockey, or take a risk climbing a mountain. The world of pure pleasure and no pain, struggle, or effort is a dystopia, not a utopia, at least in my view.
I suspect that giving any one single principle exclusive value is likely a path to a boring world tiled in paperclips. It is precisely the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living in. There is no single principle, not even maximizing pleasure and minimizing pain, that does not lead to dystopia when it is taken to its logical extreme and all other competing principles are thrown out. We are complicated and contradictory beings, and we need to embrace that complexity; not attempt to smooth it out.
Elharo, which is more interesting? Wireheading—or “the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living”? Yes, I agree, the latter certainly sounds more exciting; but “from the inside”, quite the reverse. Wireheading is always enthralling, whereas everyday life is often humdrum. Likewise with so-called utilitronium. To humans, utilitronium sounds unimaginably dull and monotonous, but “from the inside” it presumably feels sublime.
However, we don’t need to choose between aiming for a utilitronium shockwave and conserving the status quo. The point of recalibrating our hedonic treadmill is that life can be fabulously richer—in principle orders of magnitude richer—for everyone without being any less diverse, and without forcing us to give up our existing values and preference architectures. (cf. “The catechol-O-methyl transferase Val158Met polymorphism and experience of reward in the flow of daily life.”: http://www.ncbi.nlm.nih.gov/pubmed/17687265) In principle, there is nothing to stop benign (super)intelligence from spreading such reward pathway enhancements across the phylogenetic tree.
By the way. One question I always wanted to ask a pro-animal-rights type: would you support a program for the extinction/reductions of the population of predatory animals on the grounds that they cause large amounts of unnecessary suffering to their prey?
I’ve heard this posed as a “gotcha” question for vegetarians/vegans. The socially acceptable answer is the one that caters to two widespread and largely unexamined assumptions: that extinction is just bad, always, and that nature is just generally good. If the person questioned responds in any other way, he or she can be written off right there. Who the hell thinks nature is a bad thing and genocide is a good thing?
But once you get past the idea that nature is somehow inherently good and that ending any particular species is inherently bad, there’s not really any way to justify allowing the natural world to exist the way it does if you can do something about it.
It’s a “gotcha” question for vegetarians because vegetarians in the real world are seldom vegetarians in a vacuum; their vegetarianism is typically associated with, and based on, a cloud of other ideas that include respect for nature. In other words, it’s not a “gotcha” because you would write off the vegetarian who believes it; it’s a “gotcha” because believing it would undermine his own core motives, which are illogical and unstated.
I’m parsing this as follows: I don’t have a good intuition on whose suffering matters, and unbounded utilitarianism is vulnerable to the Repugnant Conclusion, so I will pick an obvious threshold (humans) and decide not to care about other animals until and unless a reason to care arises.
EDIT: the Schelling point for the caring threshold seems to be shifting toward progressively less intelligent (but still cute and harmless) species as time passes
EDIT: the Schelling point for the caring threshold seems to be shifting toward progressively less intelligent (but still cute and harmless) species as time passes
I tried. But it’s written in extreme Gwernian: well researched, but long, rambling and without a decent summary upfront. I skipped to the (also poorly written) conclusion, missing most of the arguments, and decided that it’s not worth my time. The essay would be right at home as a chapter in some dissertation, though.
Leaving aside the dynamics of the Schelling point, did the rest of my reply miss the mark?
What I mostly got out of it is that there are two big ways in which the circle of things with moral worth has shrunk rather than grown throughout history: it shrunk to exclude gods, and it shrunk to exclude dead people.
Leaving aside the dynamics of the Schelling point, did the rest of my reply miss the mark?
I’m not sure what your comment was intended to be, but if it was intended to be a summary of the point I was implicitly trying to make, then it’s close enough.
the Schelling point for the caring threshold seems to be shifting toward progressively less intelligent (but still cute and harmless) species as time passes
“Cute” I’ll give you. ”Harmless” I’m not sure about.
That is, it’s not in the least bit clear to me that I can reliably predict, from species S being harmful and cute, that the Schelling point you describe won’t/hasn’t shifted so as to include S on the cared-about side.
For clarity: I make no moral claims here about any of this, and am uninterested in the associated moral claims, I’m just disagreeing with the bare empirical claim.
The value of a species is not merely the sum of the values of the individual members of the species. I feel a moral obligation to protect and not excessively harm the environment without necessarily feeling a moral obligation to prevent each gazelle from being eaten by a lion. There is value in nature that includes the predator-prey cycle. The moral obligation to animals comes from their worth as animals, not from a utilitarian calculation to maximize pleasure and minimize pain. Animals living as animals in the wild (which is very different than animals living in a farm or as pets) will experience pleasure and pain; but even the ones too low on the complexity scale to feel pleasure and pain have value and should have a place to exist. I don’t know if an Orange Roughy feels pain or pleasure or not; but either way it doesn’t change my belief that we should stop eating them to avoid the extinction of the species.
The non-hypothetical, practical issue at hand is not do we make the world a better place for some particular species, but do we stop making it a worse one? Is it worth extinguishing a species so a few people can have a marginally tastier or more high status dinner? (whales, sharks, Patagonian Toothfish, etc.) Is it worth destroying a few dozen acres of forest containing the last habitat of a microscopic species we’ve never noticed so a few humans can play golf a little more frequently? I answer No, it isn’t. It is possible for the costs of an action to non-human species to outweigh the benefits gained by humans of taking that action.
Several people have been attempting to reductio my pro-human point of view, so I’ll do the same back to the pro-animal people here: how simple is the simplest animal you’re willing to assign moral worth to? Are you taking into account meta-uncertainty about the moral worth of even very simple animals? (What about living organisms outside of the animal kingdom, like bacteria? Viruses?) If you don’t care about organisms simple enough that they don’t suffer, does it seem “arbitrary” to you to single out a particular mental behavior as being the mental behavior that signifies moral worth? Does it seem “mindist” to you to single out having a particular kind of mind as being the thing that signifies moral worth?
If you calculated that assigning even very small moral worth to a simple but sufficiently numerous organism leads to the conclusion that the moral worth of non-human organisms on Earth strongly outweighs, in aggregate, the moral worth of humans, would you act on it (e.g. by making the world a substantially better place for some bacterium by infecting many other animals, such as humans, with it)?
If you were the only human left on Earth and you couldn’t find enough non-meat to survive on, would you kill yourself to avoid having to hunt to survive?
How do you resolve conflicts among organisms (e.g. predatorial or parasitic relationships)?
I don’t value animals per se, it is their suffering I care about and want to prevent. If it turns out that even the tiniest animals can suffer, I will take this into consideration. I’m already taking insects or nematodes into consideration probabilistically; I think it is highly unlikely that they are sentient, and I think that even if they are sentient, their suffering might not be as intense as that of mammals, but since their numbers are so huge, the well-being of all those small creatures makes up a non-negligible term in my utility function.
No, it seems completely non-arbitrary to me. Only sentient beings have a first-person point of view, only for them can states of the world be good or bad. A stone cannot be harmed in the same way a sentient being can be harmed. Introspectively, my suffering is bad because it is suffering, there is no other reason.
I don’t care about maximizing the number of morally relevant entities, so this is an unlikely scenario. But I guess the point of your question is whether I am serious about the criteria I’m endorsing. Yes, I am. If my best estimates come out in a way leading to counterintuitive conclusions, and if that remains the case even if I adjust for overconfidence on my part before doing something irreversible, then I would indeed act accordingly.
The lives of most wild animals involve a lot of suffering already, and at some point, they are likely going to die painfully anyway. It is unclear whether me killing them (assuming I’d even be skilled enough to get one of them) would be net bad. I don’t intrinsically object to beings dying/being killed. But again, if it turns out that some action (e.g. killing myself) is what best fulfills the values I’ve come up with under reflection, I will do that, or, if I’m not mentally capable of doing it, I’d take a pill that would make me capable.
I don’t know, but I assume that an AI would be able to find a great solution. Maybe through reengineering animals so they become incapable of experiencing suffering, while somehow keeping the function of pain intact. Or maybe simply get rid of Darwinian nature and replace it, if that is deemed necessary, with something artificial and nice.
A priori, it seems that the moral weight of insects would either be dominated by their massive numbers or by their tiny capacities. It’s a narrow space where the two balance and you get a non-negligible but still-not-overwhelming weight for insects in a utility function. How did you decide that this was right?
I think there are good arguments for suffering not being weighted by number of neurons, and if you assign even a 10% chance to that being the case, you end up with insects (and maybe nematodes and zooplankton) dominating the utility function because of their overwhelming numbers.
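The claim above is just expected-value arithmetic, and it can be sketched in a few lines of Python. Every number below (population sizes, neuron counts, probabilities) is an illustrative assumption, not an empirical estimate:

```python
# Expected moral weight of a population under uncertainty about whether
# suffering scales with neuron count. All numbers are illustrative guesses.

N_INSECTS = 1e18        # rough order-of-magnitude guess for insects alive at once
N_HUMANS = 8e9

NEURONS_INSECT = 1e5
NEURONS_HUMAN = 8.6e10

P_SENTIENT_INSECT = 0.1  # subjective probability that insects are sentient
P_NEURON_SCALING = 0.9   # credence that suffering scales with neuron count;
                         # with the remaining 0.1, every sentient being counts equally

def expected_weight(n, neurons, p_sentient):
    """Expected aggregate weight of a population, mixing the two hypotheses."""
    per_capita = (P_NEURON_SCALING * neurons / NEURONS_HUMAN
                  + (1 - P_NEURON_SCALING) * 1.0)
    return n * p_sentient * per_capita

insects = expected_weight(N_INSECTS, NEURONS_INSECT, P_SENTIENT_INSECT)
humans = expected_weight(N_HUMANS, NEURONS_HUMAN, 1.0)

# With these toy numbers the ratio comes out on the order of 10**6:
print(insects / humans)
```

The point is not the particular numbers but the structure: once you put any non-trivial credence on equal per-individual weighting, the sheer population term dominates the product, which is the "overwhelming numbers" argument in the comment above.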
Having said that, ways of increasing the well-being of these creatures may be quite a bit different from ways of increasing it for larger animals. In particular, because so many of them die within the first few days of life, their average life quality seems like it would be terrible. So reducing their populations looks like the current best option.
There may be good instrumental reasons for focusing on less controversial animals and hoping that they promote the kind of antispeciesism that spills over to concern about insects and does work for improving similar situations in the future.
For what it’s worth, here are the results of a survey that Vallinder and I circulated recently. 85% of expert respondents, and 89% of LessWrong respondents, believe that there is at least a 1% chance that insects are sentient, and 77% of experts and 69% of LessWrongers believe there is at least a 20% chance that they are sentient.
Very interesting. What were they experts in? And how many people responded?
They were experts in pain perception and related fields. We sent the survey to about 25 people, of whom 13 responded.
Added (6 November, 2015): If there is interest, I can reconstruct the list of experts we contacted. Just let me know.
Yes, my current estimate for that is less than 1%, but this is definitely something I should look into more closely. This has been on my to-do list for quite a while already.
Another thing to consider is that insects are a diverse bunch. I’m virtually certain that some of them aren’t conscious, see for instance this type of behavior. OTOH, cockroaches or bees seem to be much more likely to be sentient.
Yes. Bees and cockroaches both have about a million neurons, compared with maybe 100,000 for most insects.
Can you summarize the properties you look for when making these kinds of estimates of whether an insect is conscious/sentient/etc.? Or do you make these judgments based on more implicit/instinctive inspection?
I mostly do it by thinking about what I would accept as evidence of pain in more complex animals and see if it is present in insects. Complex pain behavior and evolutionary and functional homology relating to pain are things to look for.
There is a quite a bit of research on complex pain behavior in crabs by Robert Elwood. I’d link his site but it doesn’t seem to be up right now. You should be able to find the articles, though. Crabs have 100,000 neurons which is around what many insects have.
Here is a PDF of a paper finding that a bunch of common human mind-altering drugs affect crawfish and fruit flies.
Thanks.
It is quite implicit/instinctive. The problem is that without having solved the problem of consciousness, there is also uncertainty about what you’re even looking for. Nociception seems to be a necessary criterion, but it’s not sufficient. In addition, I suspect that consciousness’ adaptive role has to do with the weighting of different “possible” behaviors, so there has to be some learning behavior or variety in behavioral subroutines.
I actually give some credence to extreme views like Dennett’s (and also Eliezer’s if I’m informed correctly), which state that sentience implies self-awareness, but my confidence for that is not higher than 20%. I read a couple of papers on invertebrate sentience and I adjusted the expert estimates downwards somewhat because I have a strong intuition that many biologists are too eager to attribute sentience to whatever they are studying (also, it is a bit confusing because opinions are all over the place). Brian Tomasik lists some interesting quotes and material here.
And regarding the number of neurons thing, there I’m basically just going by intuition, which is unfortunate so I should think about this some more.
Ice9, perhaps consider uncontrollable panic. Some of the most intense forms of sentience that humans undergo seem to be associated with a breakdown of meta-cognitive capacity. So let’s hope that what it’s like to be an asphyxiating fish, for example, doesn’t remotely resemble what it feels like to be a waterboarded human. I worry that our intuitive dimmer-switch model of consciousness, i.e. more intelligent = more sentient, may turn out to be mistaken.
OK, thanks for clarifying.
Good point, there is reason to expect that I’m just assigning numbers in a way that makes the result come out convenient. Last time I did a very rough estimate, the expected suffering of insects and nematodes (given my subjective probabilities) came out around half the expected suffering of all decapods/amphibians-and-larger wild animals. And then wild animals outnumber farm animals by around 2-3 orders of magnitude in terms of expected suffering, and farm animals outnumber humans by a large margin too. So if I just cared about current suffering, or suffering on earth only, then “non-negligible” would indeed be an understatement for insect suffering.
However, what worries me most is not the suffering that is happening on earth. If space colonization goes wrong, or even just non-optimally, the current amount of suffering could be multiplied by orders of magnitude. And this might happen even if our values improve. Consider the case of farmed animals: humans probably never cared as much for the welfare of animals as they do now, but at the same time, we have never caused as much direct suffering to animals as we do now. If you primarily care about reducing the absolute amount of suffering, then whatever lets the amount of sentience skyrocket is a priori very dangerous.
Is the blue-minimizing robot suffering if it sees a lot of blue? Would you want to help alleviate that suffering by recoloring blue things so that they are no longer blue?
I don’t see the relevance of this question, but judging by the upvotes it received, it seems that I’m missing something.
I think suffering is suffering, no matter the substrate it is based on. Whether such a robot would be sentient is an empirical question (in my view anyway, it has recently come to my attention that some people disagree with this). Once we solve the problem of consciousness, it will turn out that such a robot is either conscious or that it isn’t. If it is conscious, I will try to reduce its suffering. If the only way to do that would involve doing “weird” things, I would do weird things.
The relevance is that my moral intuitions suggest that the blue-minimizing robot is morally irrelevant. But if you’re willing to bite the bullet here, then at least you’re being consistent (although I’m no longer sure that consistency is such a great property of a moral system for humans).
1) I am okay with humanely raised farm meat (I found a local butcher shop that sources from farms I consider ethical)
2) If I didn’t have access to civilization, I would probably end up hunting to survive, although I’d try to do so as rarely and humanely as was possible given my circumstances. (I’m only like 5% altruist, I just try to direct that altruism as effectively as possible and if push comes to shove I’m a primal animal that needs to eat. I’m skeptical of people who claim otherwise)
3) I’m currently okay with eating insects, mussels, and similar simplish animals, whose lack of sentience I can make pretty good guesses about. (If insects do turn out to be sentient, that’s a pretty inconvenient world to have to live in, morally.)
4) I’m approximately average-preference-utilitarian. I value there being more creatures with more complex and interesting capacities for preference satisfaction (this is arbitrary and I’m fine with that). If I had to choose between humans and animals, I’d choose humans. But that’s not the choice offered to humans re: vegetarianism—what’s at stake is not humanity and complex relationships/art/intellectual-endeavors—it’s pretty straightforward pleasure (of a sort that I expect large swaths of the animal kingdom to be capable of experiencing—visceral enjoyment of food almost certainly evolved fairly early. You are not exercising any special human-ness to experience it)
Most people don’t need meat (or much of it) to be productive (the amount most people think they need is pretty grossly wrong), and the amount of hedonic satisfaction you’re getting from eating meat is vastly dwarfed by the anti-hedons that enabled it.
5) Ultimately, what I actually advocate is making the best decisions you can, given your circumstances. This includes trading off the willpower and energy you spend on vegetarianism against other ways you might be reducing suffering or increasing pleasure/joy/complex-beauty. I wouldn’t push too hard for an effective altruist to be vegetarian. If you argue that your “give a shit” energy is better spent on fighting poverty or injustice or preventing the destruction of the world by unfriendly AI, I won’t argue with you.
But I’d like people to at least have animal suffering on the radar of “things I’d like to give a shit about, if I had the energy, and that if it became much more convenient to care about, I’d make small modifications to my lifestyle.” So that when in-vitro meat becomes cheap and tasty, I think people should make the initial effort to switch over. (Possibly even while it’s still a bit more expensive). Meanwhile, humanely-raised meat tends to be tastier (it’s overall higher quality) so if you have leftover budget for nicer food in the first place, I’d consider that.
I don’t know how to resolve things like “the ecosystem is full of terribleness”. It is possible that plans that include “destroy all natural ecosystems” will turn out to be correct, but my prior on any given person correctly deciding to do that and executing on it without making lots of things worse is low.
This is pretty much the case for me. I was vegetarian for a while in high school–oddly enough, less for reducing-suffering ethical reasons than for “it costs fewer resources to produce enough plants to feed the world population than to produce enough meat, as animals have to be fed plants and are a low-efficiency conversion of plant calories, so in order to better use the planet’s resources, everyone should eat more plants and less meat.” I consistently ended up with low iron and B12. It’s possible to get enough iron, B12, and protein as a vegetarian, but you do have to plan your meals a bit more carefully (i.e. always have beans with rice so you get complete protein) and possibly eat foods that you don’t like as much. Right now I cook about one dish with meat in it per week, and I haven’t had any iron or B12 deficiency problems since graduating high school 4 years ago.
In general, I optimize food for low cost as well as health value and ethics, but if in-vitro meat became available, I think this is valuable enough in the long run that I would be willing to “subsidize” its production and commercialization by paying higher prices.
Oddly, this sentence is more or less exactly true for me as well. Only on LessWrong...
That reasoning does not seem to be either unique to or particularly prevalent on lesswrong.
Fair enough. I’ve never encountered it elsewhere, myself.
(Typically it is expressed as an additional excuse/justification for the political and personal position being taken for unrelated reasons.)
Could you (very briefly) expand on this, or even just give a link with a reasonably accessible explanation? I am curious.
From the American Dietetic Association: http://www.ncbi.nlm.nih.gov/pubmed/19562864
Interesting, thank you.
Well, considering the existence of healthy vegetarians, it seems clear that we evolved to be at least capable of surviving in a low-meat environment.
I don’t have any sources or anything, and I’m pretty lazy, but I’ve been vegetarian since childhood, and never had any health problems as a result AFAICT.
I am entirely willing to take your word on this, but you know what they say about “anecdote” and declensions thereof. In this case specifically, one of the few things that seem to be reliably true about nutrition is that “people are different, and what works for some may fail or be outright disastrous for others”.
In any case, Raemon seemed to be making a weaker claim than “vegetarianism has no serious health downsides”. “Healthy portions of meat amount to far less than the 32 oz steak a day implied by some anti-vegetarian doomsayers” is something I’m completely willing to grant.
Fair enough.
Considering the existence of healthy vegetarians, it seems clear that we evolved to be at least capable of surviving in a low-meat environment supported by modern agriculture that produces large quantities of concentrated non-meat protein in the form of tofu, eggs, whey protein, beans, and the like. This may be a happy accident. Are there any vegetarian hunter-gatherer societies?
Wouldn’t these be “gatherer societies” pretty much definitionally?
(Unless there are Triffids!)
Obligatory Far Side reference
I’ve been having a hell of a time finding trustworthy cites on this, possibly because there are so many groups with identity stakes in the matter—obesity researchers and advocates, vegetarians, and paleo diet adherents all have somewhat conflicting interests in ancestral nutrition. That said, this survey paper describes relatively modern hunter-gatherer diets ranging from 1% vegetable (the Nunamiut of Alaska) to 74% vegetable (the Gwi of Africa), with a mean somewhere around one third; no entirely vegetarian hunter-gatherers are described. This one describes societies subsisting on up to 90% gathered food (I don’t know whether or not this is synonymous with “vegetable”), but once again no exclusively vegetarian cultures and a mean around 30%.
I should mention by way of disclaimer that modern forager cultures tend to live in marginal environments and these numbers might not reflect the true ancestral proportions. And, of course, that this has no bearing either way on the ethical dimensions of the subject.
I’m having trouble finding… any kind of dietary information that isn’t obviously politicized (in any direction) right now.
But basically, when people think of a “serving” of meat, they imagine a large hunk of steak, when in fact a serving is more like the size of a deck of cards. A healthy diet has enough things going on in it besides meat that removing meat shouldn’t feel like it’s gutting out your entire source of pleasure from food.
Ah. Yeah, I don’t eat meat in huge chunks or anything. But meat sure is delicious, and comes in a bunch of different formats. Obviously removing meat would not totally turn my diet into a bleak, gray desert of bland gruel; I don’t think anyone would claim that. But it would make it meaningfully less enjoyable, on the whole.
This all seems pretty reasonable (except that I don’t think the validity of a human preference has much to do with how difficult it is for non-humans to have the same preference).
This fact seems to outweigh the rest of your comment.
Bugs, both true and not, are most definitely part of the animal kingdom.
Whoops. Edited.
It doesn’t seem like you’re really criticizing “pro-animal people”—you’re just critiquing utilitarianism. (e.g. “Is it arbitrary to state that suffering is bad?” “What if you could help others only at great expense to yourself?”)
Supposing one does accept utilitarian principles, is there any reason why we shouldn’t care about the suffering of non-humans?
This is half a criticism and half a reflection of arguments that have been used against my position that I think are problematic. To the extent that you think these arguments are problematic, I probably agree.
Resources spent on alleviating the suffering of non-humans are resources that aren’t spent on alleviating the suffering of humans, which I value a lot more.
That’s a false dichotomy. Resources that stop being spent on alleviating the suffering of non-humans do not automatically translate into resources that are spent on alleviating the suffering of humans. Nor is it the case that there are insufficient resources in the world today to eliminate most human suffering. The issue there is purely one of distribution of wealth, not gross wealth.
Yes, but they’re less available. Maybe I triggered the wrong intuition with the word “resources.” I had in mind resources like the time and energy of intelligent people, not resources like money. I think it’s plausible to guess that time and energy spent on one altruistic cause really does funge directly against time and energy spent on others, e.g. because of good-deed-for-the-day effects.
Why?
(Keeping in mind that we have agreed the basic tenets of utilitarianism are correct: pain is bad etc.)
Oh. No. Human pain is bad. The pain of sufficiently intelligent animals might also be bad. Fish pain and under is irrelevant.
There is nothing inconsistent about valuing the pain of some animals, but not of others. That said, I find the view hard to believe. When I reflect on why I think pain is bad, it seems clear that my belief is grounded in the phenomenology of pain itself, rather than in any biological or cognitive property of the organism undergoing the painful experience.
Pain is bad because it feels bad. That’s why I think pain should be alleviated irrespective of the species in which it occurs.
I don’t share these intuitions. Pain is bad if it happens to something I care about. I don’t care about fish.
I don’t care about fish either. I care about pain. It just so happens that fish can experience pain.
Truthfully, I’m not even sure I believe pain is bad in the relevant sense. It’s certainly something I’d prefer to avoid under most circumstances, but when I think about it in detail there always ends up being a “because” in there: because it monopolizes attention, because in sufficient quantity it can thoroughly screw up your motivational and emotional machinery, because it’s often attached to particular actions in a way that limits my ability to do things. It doesn’t feel like a root-level aversion to my reasoning self: when I’ve torn a ligament and can’t flex my foot in a certain way without intense stabbing agony, I’m much more annoyed by the things it prevents me from doing than by the pain it gives me, and indeed I remember the former much better than the latter.
I haven’t thought this through rigorously, but if I had to take a stab at it right now I’d say that pain is bad in roughly the same way that pleasure is good: in other words, it works reasonably well as a rough experiential pointer to the things I actually want to avoid, and it does place certain constraints on the kind of life I’d want to live, but I’d expect trying to ground an entire moral system in it to give me some pretty insane results once I started looking at corner cases.
You probably don’t want to draw the line at fish.
What point are you trying to make with that link?
Probably that fish don’t seem to be hugely different from amphibians/reptiles, birds, and mammals in terms of the six substitute-indicators-for-feeling-pain, and so it’s hard to say whether their pain experience is different.
I would agree that fish pain is less relevant than human pain (they have a central nervous system, yes, but less of one, and a huge part of what makes human pain bad is the psychological suffering associated with it).
My claim was that I don’t care about fish pain, not that fish pain is too different from human pain to matter. Rather, fish are too different from humans to matter.
Could you expand on this idea?
Fair enough. I think “too X to matter” is a complex concept, though.
How is the statement “fish and humans feel pain approximately equally” different from the statement “we should care about fish and human pain approximately equally?”
You and I feel pain approximately equally, but I care about mine a lot more than about yours.
Do you consider this part of morality?
I mean, I personally experience selfish emotions, but I usually, y’know, try to override them?
Most people probably wouldn’t consider that moral as such (though they’d likely be okay with it on pragmatic grounds), but the more general idea of treating some people’s pain as more significant than others’ is certainly consistent with a lot of moral systems. Common privileged categories: friends, relatives, children, the weak or helpless, people not considered evil.
It’s perfectly moral for me to be selfish to some degree, yes. I cannot care about others if I don’t care about myself. You might work differently, but utter unselfishness seems like an anomaly.
It also seems like a lie (to the self or to others).
Fair enough. To restate but with different emphasis: “we should care about fish and human pain approximately equally?”
“I care about X’s pain” is mostly a statement about X, not a statement about pain. I don’t care about fish and I care about humans. You may not share this moral preference, but are you claiming that you don’t even understand it?
No, I have a lot of biases like this: the halo effect makes me think that humans’ ability to do math makes our suffering more important, “what you see is all there is” allows me to believe that slaughterhouses which operate far away must be morally acceptable, and so forth.
Anyway, fish suffering isn’t a make-or-break decision. People very frequently have the opportunity to choose a bean burrito over a chicken one (or even a beef burrito over a chicken one), and from what Peter has presented here it seems like this is an extremely effective way to reduce suffering.
I may be misunderstanding you, but I thought you were suggesting that there is a non-arbitrary set of physiological features that vertebrates share but fish don’t. I was pointing out that this doesn’t seem to be the case.
No, I’m suggesting that I don’t care about fish.
Can’t speak for all vegetarians/pro-animal-rights types, but I personally discount based on complexity (or intelligence of whatever.)
That’s not the same as discounting simpler creatures altogether—at least not when we’re discussing, say, pigs.
(At what point do you draw the line to start valuing creatures, by the way? Chimpanzees? Children? Superintelligent gods? Just curious, this isn’t a reductio.)
Right, but what’s the discount rate? What does your discount rate imply is the net moral worth of all mosquitoes on the planet? All bacteria?
I’m not sure where my line is either. It’s hovering around pigs and dolphins at the moment.
I’m not sure what the discount rate is, which is largely why I asked if you were sure about where the line was. I mostly go off intuition for determining how much various species are worth, so if you throw scope insensitivity into the mix...
Would you apply said discount rate intraspecies in addition to interspecies?
By the way. One question I always wanted to ask a pro-animal-rights type: would you support a program for the extinction/reductions of the population of predatory animals on the grounds that they cause large amounts of unnecessary suffering to their prey?
Yes. Assuming that prey populations are kept from skyrocketing (e.g. through the use of immunocontraception) since that too would result in large amounts of unnecessary suffering.
Eugine, in answer to your question: yes. If we are committed to the well-being of all sentience in our forward light-cone, then we can’t simultaneously conserve predators in their existing guise. (cf. http://www.abolitionist.com/reprogramming/index.html) Humans are not obligate carnivores; and the in vitro meat revolution may shortly make this debate redundant; but it’s questionable whether posthuman superintelligence committed to the well-being of all sentience could conserve humans in their existing guise either.
This is, sadly, not a hypothetical question. This is an issue wildlife managers face regularly. For example, do you control the population of Brown-headed Cowbirds in order to maintain or increase the population of Bell’s Vireo or Kirtland’s Warbler? The answer is not especially controversial. The only questions are which methods of predator control are most effective, and what unintended side effects might occur. However, these are practical, instrumental questions, not moral ones.
Where this comes into play with the public is in the conflict between house cats and birds. In particular, the establishment of feral cat colonies causes conflicts between people who favor non-native, vicious but furry and cute predators and people who favor native, avian, non-pet species. Indeed, this is one of the problems I have with many animal rights groups such as the Humane Society. They’re not pro-animal rights, just pro-pet-species rights.
A true concern for animals needs to treat animals as animals, not as furry baby human substitutes. We need to value the species as a whole, not just the individual members; and we need to value their inherent nature as predators and prey. A Capuchin Monkey living in a zoo safe from the threat of Harpy Eagles leads a life as limited and restricted as a human living in Robert Nozick’s Experience Machine. While zoos have their place, we should not seek to move all wild creatures into safe, sterile environments with no predators, pain, or danger any more than we would move all humans into isolated, AI-created virtual environments with no true interaction with reality.
Elharo, I take your point, but surely we do want humans to enjoy healthy lives free from hunger and disease and safe from parasites and predators? Utopian technology promises similar blessings to nonhuman sentients too. Human and nonhuman animals alike typically flourish best when free-living but not “wild”.
I’m not quite sure what you’re saying here. Could you elaborate or rephrase?
Why?
Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn’t we?
We’re treading close to terminal values here. I will express some aesthetic preference for nature qua nature. However I also recognize a libertarian attitude that we should allow other individuals to live the lives they choose in the environments they find themselves to the extent reasonably possible, and I see no justification for anthropocentric limits on such a preference.
Absent strong reasons otherwise, “do no harm” and “careful, limited action” should be the default position. The best we can do for animals that don’t have several millennia of adaptation to human companionship (i.e. not dogs, cats, and horses) is to leave them alone and not destroy their natural habitat. Where we have destroyed it, attempt to restore it as best we can, or protect what remains. Focus on the species, not the individual. We have neither the knowledge nor the will to protect individual, non-pet animals.
When you ask, “Assuming that these environments are (or would be) on the whole substantially better on the measures that matter to the individual living in them, why shouldn’t we?” it’s not clear to me whether you’re referring to why we shouldn’t move humans into virtual boxes or why we shouldn’t move animals into virtual boxes, or both. If you’re talking about humans, the answer is because we don’t get to make that choice for other humans. I for one have no desire to live my life in a Nozick box, and will oppose anyone who tries to put me in one while I’m still capable of living a normal life. If you’re referring to animals, the argument is similar though more indirect. Ultimately humans should not take it upon themselves to decide how another species lives. The burden of proof rests on those who wish to tamper with nature, not those who wish to leave it alone.
That strikes me as inconsistent, assuming that preventing suffering/minimizing disutility is also a terminal value. In those terms, nature is bad. Really, really bad.
It seems arbitrary to exclude the environment from the cluster of factors that go into living “the lives they choose.” I choose to not live in a hostile environment where things much larger than me are trying to flay me alive, and I don’t think it’s too much of a stretch to assume that most other conscious beings would choose the same if they knew they had the option.
Taken with this...
...it seems like you don’t really have a problem with animal suffering, as long as human beings aren’t the ones causing it. But the gazelle doesn’t really care whether she’s being chased down by a bowhunter or a lion, although she might arguably prefer that the human kill her if she knew what was in store for her from the lion.
I still don’t know why you think we ought to value predators’ “inherent nature” as predators or treat entire species as more important than their constituent individuals. My follow-up questions would be:
(1) If there were a species of animal who fed on the chemicals produced from intense, prolonged suffering and fear, would we be right to value its “inherent nature” as a torturer? Would it not be justifiable to either destroy it or alter it sufficiently that it didn’t need to torture other creatures to eat?
(2) What is the value in keeping any given species in existence, assuming that its disappearance would have an immense positive effect on the other conscious beings in its environment? Why is having n species necessarily better than having n-1? Presumably, you wouldn’t want to add the torture-predators in the question above to our ecosystem—but if they were already here, would you want them to continue existing? Are worlds in which they exist somehow better than ours?
We certainly know enough to be able to cure their most common ailments, ease their physical pain, and prevent them from dying from the sort of injuries and illnesses that would finish them off in their natural environments. Our knowledge isn’t perfect, but it’s a stretch to say we don’t have “the knowledge to protect” them. I suspect that our will to do so is constrained by the scope of the problem. “Fixing nature” is too big a task to wrap our heads around—for now. That might not always be the case.
Both.
Then that environment wouldn’t be better on the measures that matter to you, although I suspect that there is some plausible virtual box sufficiently better on the other measures that you would prefer it to the box you live in now. I have a hard time understanding what is so unappealing about a virtual world versus the “real one.”
This suggests to me that you haven’t really internalized exactly how bad it is to be chased down by something that wants to pin you down and eat parts of you away until you finally die.
To prove what?
Two values being in conflict isn’t necessarily inconsistent; it just means that you have to make trade-offs.
An example of the importance of predators I happened across recently:
“Safer Waters”, Alisa Opar, Audubon, July-August 2013, p. 52
This is just one example of the importance of top-level predators for everything in the ecosystem. Nature is complex and interconnected. If you eliminate some species because you think they’re mean, you’re going to damage a lot more.
This is an excellent example of how it’s a bad idea to mess with ecosystems without really knowing what you’re doing. Ideally, any intervention should be tested on some trustworthy (ie. more-or-less complete, and experimentally verified) ecological simulations to make sure it won’t have any catastrophic effects down the chain.
But of course it would be a mistake to conclude from this that keeping things as they are is inherently good.
I’d just like to point out that (a) “mean” is a very poor descriptor of predation (neither its severity nor its connotations re: motivation do justice to reality), and (b) this use of “damage” relies on the use of “healthy” to describe a population of beings routinely devoured alive well before the end of their natural lifespans. If we “damaged” a previously “healthy” system wherein the same sorts of things were happening to humans, we would almost certainly consider it a good thing.
If “natural lifespans” means what they would have if they weren’t eaten, it’s a tautology. If not, what does it mean? The shark’s “natural” lifespan requires that it eats other creatures. Their “natural” lifespan requires that it does not.
Yes, I’m using “natural lifespan” here as a placeholder for “the typical lifespan assuming nothing is actively trying to kill you.” It’s not great language, but I don’t think it’s obviously tautological.
Yes. My question is whether that’s a system that works for us.
We can say, “Evil sharks!” but I don’t feel any need to either exterminate all predators from the world, nor to modify them to graze on kelp. Yes, there’s a monumental amount of animal suffering in the ordinary course of things, even apart from humans. Maybe there wouldn’t be in a system designed by far future humans from scratch. But radically changing the one we live in when we hardly know how it all works—witness the quoted results of overfishing shark—strikes me as quixotic folly.
It strikes me as folly, too. But “Let’s go kill the sharks, then!” does not necessarily follow from “Predation is not anywhere close to optimal.” Nowhere have I (or anyone else here, unless I’m mistaken) argued that we should play with massive ecosystems now.
I’m very curious why you don’t feel any need to exterminate or modify predators, assuming it’s likely to be something we can do in the future with some degree of caution and precision.
That sort of intervention is too far in the future for me to consider it worth thinking about. People of the future can take care of it then. That applies even if I’m one of those people of the far future (not that I expect to be). Future-me can deal with it, present-me doesn’t care or need to care what future-me decides.
In contrast, smallpox, tuberculosis, cholera, and the like are worth exterminating now, because (a) unlike the beautiful big fierce animals, they’re no loss in themselves, (b) it doesn’t appear that their loss will disrupt any ecosystems we want to keep, and (c) we actually can do it here and now.
There’s something about this sort of philosophy that I’ve wondered about for a while.
Do you think that deriving utility from the suffering of others (or, less directly, from activities that necessarily involve the suffering of others) is a valid value? Or is it intrinsically invalid?
That is, if we were in a position to reshape all of reality according to our whim, and decided to satisfy the values of all morally relevant beings, would we also want to satisfy the values of beings that derive pleasure/utility from the suffering of others, assuming we could do so without actually inflicting disutility/pain on any other beings?
And more concretely: in a “we are now omnipotent gods” scenario where we could, if we wanted to, create for sharks an environment where they could eat fish to their hearts’ content (and these would of course be artificial fish without any actual capacity for suffering, unbeknownst to the sharks) — would we do so?
Or would we judge the sharks’ pleasure from eating fish to be an invalid value, and simply modify them to not be predators?
The shark question is perhaps a bit esoteric; but if we substitute “psychopaths” or “serial killers” for “sharks”, it might well become relevant at some future date.
I’m not sure what you mean by “valid” here—could you clarify? I will say that I think a world where beings are deriving utility from the perception of causing suffering without actually causing suffering isn’t inferior to a world where beings are deriving the same amount of utility from some other activity that doesn’t affect other beings, all else held equal. However, it seems like it might be difficult to maintain enough control over the system to ensure that the pro-suffering beings don’t do anything that actually causes suffering.
Sure. By “valid” I mean something like “worth preserving”, or “to be endorsed as a part of the complex set of values that make up human-values-in-general”.
In other words, in the scenario where we’re effectively omnipotent (for this purpose, at least), and have decided that we’re going to go ahead and satisfy the values of all morally relevant beings — are we going to exclude some values? Or exclude some beings on the basis of their values? For example: should we, in such a scenario, say: “we’ll satisfy the values of all the humans, except the psychopaths/sharks/whoever; we don’t find their values to be worth satisfying, so they’re going to be excluded from this”?
I would guess, for instance, that few people here would say: yeah, along with satisfying the values of all humans, let’s also satisfy the values of all the paperclip maximizers. We don’t find paperclip maximization to be a valid value, in that sense.
So my question to you is where you stand on all of that. Are there invalid values? Would you, in fact, try to satisfy Clippy’s values as well as those of humans? If not, how about sharks? Psychopaths? Etc.?
Ok. Actually, I could take that as an answer to at least some of my above questions, but if you want to expand a bit on what I ask in this post, that would be cool.
Well, sure. But let’s keep this in the least convenient possible world, where such non-fundamental issues are somehow dealt with.
There’s a lot here, and I will try to address some specific points later. For now, I will say that personally I do not espouse utilitarianism for several reasons, so if you find me inconsistent with utilitarianism, no surprise there. Nor do I accept the complete elimination of all suffering and maximization of pleasure as a terminal value. I do not want to live, and don’t think most other people want to live, in a matrix world where we’re all drugged to our gills with maximal levels of L-dopamine and fed through tubes.
Eliminating torture, starvation, deprivation, deadly disease, and extreme poverty is good; but that’s not the same thing as saying we should never stub our toe, feel some hunger pangs before lunch, play a rough game of hockey, or take a risk climbing a mountain. The world of pure pleasure and no pain, struggle, or effort is a dystopia, not a utopia, at least in my view.
I suspect that giving any one single principle exclusive value is likely a path to a boring world tiled in paperclips. It is precisely the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living in. There is no single principle, not even maximizing pleasure and minimizing pain, that does not lead to dystopia when it is taken to its logical extreme and all other competing principles are thrown out. We are complicated and contradictory beings, and we need to embrace that complexity; not attempt to smooth it out.
Elharo, which is more interesting? Wireheading—or “the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living”? Yes, I agree, the latter certainly sounds more exciting; but “from the inside”, quite the reverse. Wireheading is always enthralling, whereas everyday life is often humdrum. Likewise with so-called utilitronium. To humans, utilitronium sounds unimaginably dull and monotonous, but “from the inside” it presumably feels sublime.
However, we don’t need to choose between aiming for a utilitronium shockwave and conserving the status quo. The point of recalibrating our hedonic treadmill is that life can be fabulously richer—in principle orders of magnitude richer—for everyone without being any less diverse, and without forcing us to give up our existing values and preference architectures. (cf. “The catechol-O-methyl transferase Val158Met polymorphism and experience of reward in the flow of daily life.”: http://www.ncbi.nlm.nih.gov/pubmed/17687265) In principle, there is nothing to stop benign (super)intelligence from spreading such reward pathway enhancements across the phylogenetic tree.
I’ve heard this posed as a “gotcha” question for vegetarians/vegans. The socially acceptable answer is the one that caters to two widespread and largely unexamined assumptions: that extinction is just bad, always, and that nature is just generally good. If the person questioned responds in any other way, he or she can be written off right there. Who the hell thinks nature is a bad thing and genocide is a good thing?
But once you get past the idea that nature is somehow inherently good and that ending any particular species is inherently bad, there’s not really any way to justify allowing the natural world to exist the way it does if you can do something about it.
It’s a “gotcha” question for vegetarians because vegetarians in the real world are seldom vegetarians in a vacuum; their vegetarianism is typically associated with, and based on, a cloud of other ideas that include respect for nature. In other words, it’s not a “gotcha” because you would write off the vegetarian who believes it; it’s a “gotcha” because believing it would undermine his own core motives, which are illogical and unstated.
The former effect would generally be a heckuva lot smaller than the latter.
I’m parsing this as follows: I don’t have a good intuition about whose suffering matters, and unbounded utilitarianism is vulnerable to the Repugnant Conclusion, so I will pick an obvious threshold (humans) and decide not to care about other animals until and unless a reason to care arises.
EDIT: the Schelling point for the caring threshold seems to be shifting toward progressively less intelligent (but still cute and harmless) species as time passes
Have you read The Narrowing Circle?
I tried. But it’s written in extreme Gwernian: well researched, but long, rambling and without a decent summary upfront. I skipped to the (also poorly written) conclusion, missing most of the arguments, and decided that it’s not worth my time. The essay would be right at home as a chapter in some dissertation, though.
Leaving aside the dynamics of the Schelling point, did the rest of my reply miss the mark?
What I mostly got out of it is that there are two big ways in which the circle of things with moral worth has shrunk rather than grown throughout history: it shrank to exclude gods, and it shrank to exclude dead people.
I’m not sure what your comment was intended to be, but if it was intended to be a summary of the point I was implicitly trying to make, then it’s close enough.
… are you including chimpanzees there, by any chance?
“Cute” I’ll give you.
“Harmless” I’m not sure about.
That is, it’s not in the least bit clear to me that I can reliably predict, from species S being harmful and cute, that the Schelling point you describe won’t/hasn’t shifted so as to include S on the cared-about side.
For clarity: I make no moral claims here about any of this, and am uninterested in the associated moral claims, I’m just disagreeing with the bare empirical claim.
I think it’s simply a case of more animals moving into the harmless category as our technology improves.
The value of a species is not merely the sum of the values of the individual members of the species. I feel a moral obligation to protect and not excessively harm the environment without necessarily feeling a moral obligation to prevent each gazelle from being eaten by a lion. There is value in nature that includes the predator-prey cycle. The moral obligation to animals comes from their worth as animals, not from a utilitarian calculation to maximize pleasure and minimize pain. Animals living as animals in the wild (which is very different from animals living on a farm or as pets) will experience pleasure and pain; but even the ones too low on the complexity scale to feel pleasure and pain have value and should have a place to exist. I don’t know whether an Orange Roughy feels pain or pleasure; but either way it doesn’t change my belief that we should stop eating them to avoid the extinction of the species.
The non-hypothetical, practical issue at hand is not whether we make the world a better place for some particular species, but whether we stop making it a worse one. Is it worth extinguishing a species so a few people can have a marginally tastier or more high-status dinner? (whales, sharks, Patagonian Toothfish, etc.) Is it worth destroying a few dozen acres of forest containing the last habitat of a microscopic species we’ve never noticed so a few humans can play golf a little more frequently? I answer No, it isn’t. It is possible for the costs of an action to non-human species to outweigh the benefits gained by humans from taking that action.
Why?
What worth?
Where does this belief come from?