Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. it’s OK to harm the weak), he is increasing the chance that any influence they have will be directed towards bad outcomes.
That has very little to do with whether Eliezer should make public declarations of things. Are you of the opinion that Eliezer does not share your view on this matter? (I don’t know whether he does, personally.) If so, you should be attempting to convince him, I guess. If you think that he already agrees with you, your work is done. Public declarations would only be signaling, having little to do with maximizing good outcomes.
As for the other thing — I should think the fact that we’re having some disagreement in the comments on this very post, about whether animal suffering is important, would be evidence that it’s not quite as uncontroversial as you imply. I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one. Perhaps you should write one? I’d be interested in reading it.
I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one.
I think we should be wary of reasoning that takes the form: “There is no good argument for x on Less Wrong, therefore there are likely no good arguments for x.”
Certainly we should, but that was not my reasoning. What I said was:
I don’t think that we should take this thesis (“suffering (and pleasure) are important where-ever they occur, whether in humans or mice”) to be well-established and uncontroversial, even among the transhumanist/singularitarian/lesswrongian crowd. [emphasis added]
I object to treating an issue as settled and uncontroversial when it’s not. And the implication was that if this issue is not settled here, then it’s likely to be even less settled elsewhere; after all, we do have a greater proportion of vegetarians here at Less Wrong than in the general population.
“I will act as if this is a settled issue” in such a case is an attempt to take an epistemic shortcut. You’re skipping the whole part where you actually, you know, argue for your viewpoint, present reasoning and evidence to support it, etc. I would like to think that we don’t resort to such tricks here.
If caring about animal suffering is such a straightforward thing, then please, write a post or two outlining the reasons why. Posters on Less Wrong have convinced us of far weirder things; it’s not as if this isn’t a receptive audience. (Or, if there are such posts and I’ve just missed them, link please. Or! If you think there are very good, LW-quality arguments elsewhere, why not write a Main post with a few links, with maybe brief summaries of each?)
SaidAchmiz, you’re right. The issue isn’t settled: I wish it were so. The Transhumanist Declaration (1998, 2009) of the World Transhumanist Association / Humanity Plus does express a non-anthropocentric commitment to the well-being of all sentience.
[“We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise” : http://humanityplus.org/philosophy/transhumanist-declaration/]
But I wonder what percentage of lesswrongers would support such a far-reaching statement?
Mentioning “non-human animals” in the same sentence and context along with humans and AIs, and “other intelligences” (implying that non-human animals may be usefully referred to as “intelligences”, i.e. that they are similar to humans along the relevant dimensions here, such as intelligence, reasoning capability, etc.) reads like an attempt to smuggle in a claim by means of that implication. Now, I don’t impute ignoble intent to the writers of that declaration; they may well consider the question settled, and so do not consider themselves to be making any unsupported claims. But there’s clearly a claim hidden in that statement, and I’d like to see it made quite explicit, at least, even if you think it’s not worth arguing for.
That is, of course, apart from my belief that animals do not have intrinsic moral value. (To be truthful, I often find myself more annoyed with bad arguments than wrong beliefs or bad deeds.)
I object to treating an issue as settled and uncontroversial when it’s not. And the implication was that if this issue is not settled here, then it’s likely to be even less settled elsewhere; after all, we do have a greater proportion of vegetarians here at Less Wrong than in the general population.
Those who have thought most about this issue, namely professional moral philosophers, generally agree (1) that suffering is bad for creatures of any species and (2) that it’s wrong for people to consume meat and perhaps other animal products (the two claims that seem to be the primary subjects of dispute in this thread). As an anecdote, Jeff McMahan—a leading ethicist and political philosopher—mentioned at a recent conference that the moral case for vegetarianism was one of the easiest cases to make in all philosophy (a discipline where peer disagreement is pervasive).
I mention this, not as evidence that the issue is completely settled, but as a reply to your speculation that there is even more disagreement in the relevant community outside Less Wrong.
(Or, if there are such posts and I’ve just missed them, link please. Or! If you think there are very good, LW-quality arguments elsewhere, why not write a Main post with a few links, with maybe brief summaries of each?)
Frankly, I’m baffled by your insistence that the relevant arguments must be found in the Less Wrong archives. There’s plenty of good material out there which I’m happy to recommend if you are interested in reading what others who have thought about these issues much more than either of us have written on the subject.
Those who have thought most about this issue, namely professional moral philosophers, generally agree [...] that it’s wrong for people to consume meat and perhaps other animal products
Citation needed. :)
As an anecdote, Jeff McMahan mentioned at a recent conference that the moral case for vegetarianism was one of the easiest cases to make in all philosophy (a discipline where peer disagreement is pervasive).
It’s interesting that you use Jeff McMahan as an example. In his essay The Meat Eaters, McMahan makes some excellent arguments; his replies to the “playing God” and “against Nature” objections, for instance, are excellent examples of clear reasoning and argument, as is his commentary on the sacredness of species. (As an aside, when McMahan started talking about the hypothetical modification or extinction of carnivorous species, I immediately thought of Stanislaw Lem’s Return From the Stars, where the human civilization of a century hence has chemically modified all carnivores, including humans, to be nonviolent, evidently having found some way to solve the ecological issues.)
But one thing he doesn’t do is make any argument for why we should care about the suffering of animals. The moral case, as such, goes entirely unmade; McMahan only alludes to its obviousness once or twice. If he thinks it’s an easy case to make — perhaps he should go ahead and make it! (Maybe he does elsewhere? If so, a quick googling does not turn it up. Links, as always, would be appreciated.) He just takes “animal suffering is bad” as an axiom. Well, fair enough, but if I don’t share that axiom, you wouldn’t expect me to be convinced by his arguments, yes?
I mention this, not as evidence that the issue is completely settled, but as a reply to your speculation that there is even more disagreement in the relevant community outside Less Wrong.
I don’t think the relevant community outside Less Wrong is professional moral philosophers. I meant something more like… “intellectuals/educated people/technophiles/etc. in general”, and then even more broadly than that, “people in general”. However, this is a peripheral issue, so I’m ok with dropping it.
Frankly, I’m baffled by your insistence that the relevant arguments must be found in the Less Wrong archives. There’s plenty of good material out there which I’m happy to recommend if you are interested in reading what others who have thought about these issues much more than either of us have written on the subject.
In case it wasn’t clear (sorry!), yes, I am interested in reading good material elsewhere (preferably in the form of blog posts or articles rather than entire books or long papers, at least as summaries); if you have some to recommend, I’d appreciate it. I just think that if such very convincing material exists, you (or someone) should post it (links or even better, a topic summary/survey) on Less Wrong, such that we, a community with a high level of discourse, may discuss, debate, and examine it.
(FWIW, I’m not the one downvoting your comments, and I think it’s a shame that the debate has become so “politicized”.)
Here are a couple of relevant survey articles:
Jeff McMahan, Animals, in The Blackwell Companion to Applied Ethics, Oxford: Blackwell, 2002, pp. 525–536.
Stuart Rachels, Vegetarianism, in The Oxford Handbook of Animal Ethics, Oxford: Oxford University Press, 2012, pp. 877–905.
On the seriousness of suffering, see perhaps
Thomas Nagel, Pleasure and Pain, in The View from Nowhere, Oxford: Oxford University Press, 1986, pp. 156–162.
--
Here are some quotes about pain from contemporary moral philosophers which I believe are fairly representative. (I don’t have any empirical studies to back this up, other than my impression from interacting with this community for several years, and my inability to find even a single quote that supports the contrary position.)
When I am in pain, it is plain, as plain as anything is, that what I am experiencing is bad.
Guy Kahane, The Sovereignty of Suffering: Reflections on Pain’s Badness, 2004, p. 2
Some things are bad without it being the case that we have a prima facie duty to get rid of them. The badness of suffering is different. Here I need to use somewhat metaphorical language to get across what seems to me to be the heart of the matter. Where there is suffering, there exists a demand or an appeal for the prevention of that suffering. I say “a demand or an appeal,” but this demand does not issue from anyone in particular, nor is it addressed to anyone in particular. We might say (again metaphorically) that suffering cries out for its own abolition or cancellation.
Jamie Mayerfeld, Suffering and Moral Responsibility, Oxford, 2002, p. 111.
[Pain] is a bad thing in itself. It does not matter who experiences it, or where it comes in a life, or where in the course of a painful episode. Pain is bad; it should not happen. There should be as little pain as possible in the world, however it is distributed across people and across time.
John Broome, ‘More Pain or Less?’, Analysis, vol. 56, no. 2 (April, 1996), p. 117
it seems to me that certain things, such as pain and suffering to take the clearest example, are bad. I don’t think I’m just making that up, and I don’t think that is just an arbitrary personal preference of mine. If I put my finger in a flame, I have a certain experience, and I can directly see something about it (about the experience) that is bad. Furthermore, if it is bad when I experience pain, it seems that it must also be bad when someone else experiences pain. Therefore, I should not inflict such pain on others, any more than they should inflict it on me. So there is at least one example of a rational moral principle.
Michael Huemer, Ethical Intuitionism, Basingstoke, Hampshire, 2005, p. 250.
The idea that it is wrong to cause suffering, unless there is a sufficient justification, is one of the most basic moral principles, shared by virtually anyone.
James Rachels, ‘Animals and Ethics’, in Edward Craig (ed.), Routledge Encyclopedia of Philosophy, London, 1998, sect. 3.
Thank you! This is an impressive array of references, and I will read at least some of them as soon as I have time. I very much appreciate you taking the time to collect and post them.
(FWIW, I’m not the one downvoting your comments, and I think it’s a shame that the debate has become so “politicized”.)
Thank you. The downvotes don’t worry me too much, at least partly because I continue to be unsure about what down/upvotes even mean on this site. (It seems to be an emotivist sort of yay/boo thing? Not that there’s necessarily anything terribly wrong with that, it just doesn’t translate to very useful data, especially in small quantities.)
To anyone who is downvoting my comments: I’d be curious to hear your reasons, if you’re willing to explain them publicly. Though I do understand if you want to remain anonymous.
Stuart Rachels, Vegetarianism, in The Oxford Handbook of Animal Ethics, Oxford: Oxford University Press, 2012, pp. 877–905.
So, I’ve just finished reading this one.
To say that I found it unconvincing would be quite the understatement.
For one, Rachels seems entirely unwilling to even take seriously any objections to his moral premises or argument (he, again, takes the idea that we should care about animal suffering as given). He dismisses the strongest and most interesting objections outright; he selects the weakest objections to rebut, and condescendingly adds that “Resistance to [such] arguments usually stems from emotion, not reason. … Moreover, they [opponents of his argument] want to justify their next hamburger.”
Rachels then launches into a laundry list of other arguments against eating factory farmed animals, not based on a moral concern for animals. It seems that factory farming is bad in literally every way! It’s bad for animals, it’s bad for people, it causes diseases, eating meat is bad for our health, and more, and more.
(I’m always wary of such claims. When someone tells you thing A has bad effect X, you listen with concern; when they add that oh yeah, it also had bad effect Y! And Z! And W! … and then you discover that their political/ideological alignment is “opponent of thing A”… suspicion creeps in. Can eating meat really just be universally bad, bad in every way, irredeemably bad so as to be completely unmotivated? Well, there’s no law of nature that says that can’t be the case (e.g. eating uranium probably has no upside), but I’m inclined to treat such claims with skepticism, and, in any case, I’d prefer each aspect of meat-eating to be argued against separately, such that I can evaluate them individually, not be faced with a shotgun barrage of everything at once.)
Incidentally, I find the “factory farming is detrimental to local human populations” argument much more convincing than any of the others, certainly far more so than the animal-suffering argument. If the provided facts are accurate, then that’s the most salient case for stopping the practice — or, preferably, reforming it so as to mitigate the environmental and public-health impact.
I assign the “eating meat is bad for you” argument negligible weight. The one universal truth I’ve observed about nutrition claims is that finding someone else who’s making the opposite claim is trivial. (The corollary is that generalizing nutritional findings to all humans in all circumstances is nigh-impossible.) Red meat reduces lifespan? But the peoples of the Caucasus highlands eat almost nothing but red meat, and they’ve got some of the longest lifespans in the world. The citations in this section, incidentally, amount to “page so-and-so of some book” and “a study”. I can find “a study” that proves pretty much any nutritional claim. Thumbs down. (Vegetarians should really stay away from human-health arguments. It never makes them look good.)
Of the rest of the arguments Rachels makes, I found “industrial farming is worse than the Holocaust” (yes, he really claims this, making it clear that he means it) particularly ludicrous. Obviously, this argument is made with the express intent of being provocative; but as it does seem that Rachels genuinely believes it to be true, I can’t help but conclude that here is a person who is exemplifying one of the most egregious failure modes of naive utilitarianism. (How many chickens would I sacrifice to save my great-grandfather from the Nazis? N, where N is any number. This seems to argue either for rejecting straightforward aggregation of value or for assigning chickens a value of 0.)
The one universal truth I’ve observed about nutrition claims is that finding someone else who’s making the opposite claim is trivial. (The corollary is that generalizing nutritional findings to all humans in all circumstances is nigh-impossible.)
“Partially hydrogenated vegetable oils prevent heart disease and improve lipid profile”. To the extent that it is true that it is trivial to find someone claiming the opposite of every nutritional claim, it is trivial to find people who are clearly just plain wrong. (The position you are taking is far too strong to be tenable.)
The opposite claim of “Food X causes problem Y” is not necessarily “Food X reduces problem Y”. “It is not the case that (or “there is no evidence that”) Food X causes problem Y” also counts as “opposite”. That’s how I meant it: every time someone says “X causes Y”, there’s some other study that concludes that eh, actually, it’s not clear that X causes Y, and in fact probably doesn’t.
SaidAchmiz, one difference between factory farming and the Holocaust is that the Nazis believed in the existence of an international conspiracy of the Jews to destroy the Aryan people. Humanity’s only justification of exploiting and killing nonhuman animals is that we enjoy the taste of their flesh. No one believes that factory-farmed nonhuman animals have done “us” any harm.

Perhaps the parallel with the (human) Holocaust fails for another reason. Pigs, for example, are at least as intelligent as prelinguistic toddlers; but are they less sentient? The same genes, neural processes, anatomical pathways and behavioural responses to noxious stimuli are found in pigs and toddlers alike. So I think the burden of proof here lies on meat-eating critics who deny any equivalence.

A third possible reason for denying the parallel is the issue of potential. Pigs (etc) lack the variant of the FOXP2 gene implicated in generative syntax. In consequence, pigs will never match the cognitive capacities of many but not all adult humans. The problem with this argument is that we don’t regard, say, humans with infantile Tay-Sachs who lack the potential to become cognitively mature adults as any less worthy of love, care and respect than healthy toddlers. Indeed, the Nazi treatment of congenitally handicapped humans (the “euthanasia” program) is often confused with the Holocaust, for which it provided many of the technical personnel.

A fourth reason to deny the parallel with the human Holocaust is that it’s offensive to Jewish people. Yet this uncomfortable parallel has been drawn by some Jewish writers: the comparison to “an eternal Treblinka”, for example, was made by Isaac Bashevis Singer, the Jewish-American Nobel laureate.

Apt comparison or otherwise, creating nonhuman-animal-friendly intelligence is going to be an immense challenge.
Humanity’s only justification of exploiting and killing nonhuman animals is that we enjoy the taste of their flesh.
It seems to me like a far more relevant justification for exploiting and killing nonhuman animals is “and why shouldn’t we do this...?”, which is the same justification we use for exploiting and killing ore-bearing rocks. Which is to say, there’s no moral problem with doing this, so it needs no “justification”.
Pigs, for example, are at least as intelligent as prelinguistic toddlers; but are they less sentient? The same genes, neural processes, anatomical pathways and behavioural responses to noxious stimuli are found in pigs and toddlers alike. So I think the burden of proof here lies on meat-eating critics who deny any equivalence.
I make it clear in this post that I don’t deny the equivalence, and don’t think that very young children have the moral worth of cognitively developed humans. (The optimal legality of Doing Bad Things to them is a slightly more complicated matter.)
we don’t regard, say, humans with infantile Tay-Sachs who lack the potential to become cognitively mature adults as any less worthy of love, care and respect than healthy toddlers.
Well, I certainly do.
Apt comparison or otherwise, creating nonhuman-animal-friendly intelligence is going to be an immense challenge.
Eh...? Expand on this, please; I’m quite unsure what you mean here.
SaidAchmiz, to treat exploiting and killing nonhuman animals as ethically no different from “exploiting and killing ore-bearing rocks” does not suggest a cognitively ambitious level of empathetic understanding of other subjects of experience. Isn’t there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I’d have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.
Isn’t there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I’d have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.
No. Someone who cares about human-level beings but not animals will care about the plight of humans in the face of an AI, but there’s no reason they must care about the plight of animals in the face of humans, because they didn’t care about animals to begin with.
It may be that the best construction for a friendly AI is some kind of complex perspective taking that lends itself to caring about animals, but this is a fact about the world; it falls on the is side of the is-ought divide.
a cognitively ambitious level of empathetic understanding of other subjects of experience
What the heck does this mean? (And why should I be interested in having it?)
Isn’t there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever?
Wikipedia says:
In modern western philosophy, sentience is the ability to experience sensations (known in philosophy of mind as “qualia”).
If that’s how you’re using “sentience”, then:
1) It’s not clear to me that (most) nonhuman animals have this quality; 2) This quality doesn’t seem central to moral worth.
So I see no irony.
If you use “sentience” to mean something else, then by all means clarify.
There are some other problems with your formulation, such as:
1) I don’t “belong to” MIRI (which is the organization you refer to, yes?). I have donated to them, which I suppose counts? 2) Your description of their mission, specifically the implied comparison of an FAI with humans, is inaccurate.
the computational equivalent of Godlike capacity for perspective-taking
You use a lot of terms (“cognitively ambitious”, “cognitively humble”, “empathetic understanding”, “Godlike capacity for perspective-taking” (and “the computational equivalent” thereof)) that I’m not sure how to respond to, because it seems like either these phrases are exceedingly odd ways of referring to familiar concepts, or else they are incoherent and have no referents. I’m not sure which interpretation is dictated by the principle of charity here; I don’t want to just assume that I know what you’re talking about. So, if you please, do clarify what you mean by… any of what you just said.
Well, first of all, this is just false. People do things for the barest, most trivial of reasons all the time. You’re walking along the street and you kick a bottle that happens to turn up in your path. What’s in it for you? In the most trivial sense you could say that “I felt like it” is what’s in it for you, but then the concept rather loses its meaning.
In any case, that’s a tangent, because you mistook my meaning: I wasn’t talking about the motivation for doing something. I (and davidpearce, as I read him) was talking about the moral justification for eating meat. His comment, under my interpretation, was something like: “Exploiting and killing nonhuman animals carries great negative moral value. What moral justification do we have for doing this? (i.e. what positive moral value counterbalances it?) None but that we enjoy the taste of their flesh.” (Implied corollary: and that is inadequate moral justification!)
To which my response was, essentially, that morally neutral acts do not require such justification. (And by implication, I was contradicting davidpearce by claiming that killing and eating animals is a morally neutral act.) If I smash a rock, I don’t need to justify that (unless the rock was someone’s property, I suppose, which is not the issue we’re discussing). I might have any number of motivations for performing a morally neutral act, but they’re none of anyone’s business, and certainly not an issue for moral philosophers.
(Did you really not get all of this intended meaning from my comment...? If that’s how you interpreted what I said, shouldn’t you be objecting that smashing ore-bearing rocks is not, in fact, unmotivated, as I would seem to be implying, under your interpretation?)
“Public declarations would only be signaling, having little to do with maximizing good outcomes.”
On the contrary, trying to influence other people in the AI community to share Eliezer’s (apparent) concern for the suffering of animals is very important, for the reason given by David.
“I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one.”
a) Less Wrong doesn’t contain the best content on this topic.
b) Most of the posts disputing whether animal suffering matters are written by un-empathetic non-realists, so we would have to discuss meta-ethics and how to deal with meta-ethical uncertainty to convince them.
c) The reason has been given by Pablo Stafforini—when I directly experience the badness of suffering, I don’t only perceive that suffering is bad for me (or bad for someone with blonde hair, etc), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering).
d) Even if there is some uncertainty about whether animal suffering is important, that would still require that it be taken quite seriously; even if there were only a 50% chance that other humans mattered, it would be bad to lock them up in horrible conditions, or signal through my actions to potentially influential people that doing so is OK.
c) The reason has been given by Pablo Stafforini—when I directly experience the badness of suffering, I don’t only perceive that suffering is bad for me (or bad for someone with blonde hair, etc), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering).
This is an interesting argument, but it seems a bit truncated. Could you go into more detail?
In case it wasn’t clear (sorry!), yes, I am interested in reading good material elsewhere (preferably in the form of blog posts or articles rather than entire books or long papers, at least as summaries); if you have some to recommend, I’d appreciate it. I just think that if such very convincing material exists, you (or someone) should post it (links or even better, a topic summary/survey) on Less Wrong, such that we, a community with a high level of discourse, may discuss, debate, and examine it.
(FWIW, I’m not the one downvoting your comments, and I think it’s a shame that the debate has become so “politicized”.)
Here are a couple of relevant survey articles:
Jeff McMahan, ‘Animals’, in The Blackwell Companion to Applied Ethics, Oxford: Blackwell, 2002, pp. 525–536.
Stuart Rachels, ‘Vegetarianism’, in The Oxford Handbook of Animal Ethics, Oxford: Oxford University Press, 2012, pp. 877–905.
On the seriousness of suffering, see perhaps
Thomas Nagel, ‘Pleasure and Pain’, in The View from Nowhere, Oxford: Oxford University Press, 1986, pp. 156–162.
--
Here are some quotes about pain from contemporary moral philosophers which I believe are fairly representative. (I don’t have any empirical studies to back this up, other than my impression from interacting with this community for several years, and my inability to find even a single quote that supports the contrary position.)
Guy Kahane, ‘The Sovereignty of Suffering: Reflections on Pain’s Badness’, 2004, p. 2.
Jamie Mayerfeld, Suffering and Moral Responsibility, Oxford, 2002, p. 111.
John Broome, ‘More Pain or Less?’, Analysis, vol. 56, no. 2 (April 1996), p. 117.
Michael Huemer, Ethical Intuitionism, Basingstoke, Hampshire, 2005, p. 250.
James Rachels, ‘Animals and Ethics’, in Edward Craig (ed.), Routledge Encyclopedia of Philosophy, London, 1998, sect. 3.
Thank you! This is an impressive array of references, and I will read at least some of them as soon as I have time. I very much appreciate you taking the time to collect and post them.
Thank you. The downvotes don’t worry me too much, at least partly because I continue to be unsure about what down/upvotes even mean on this site. (It seems to be an emotivist sort of yay/boo thing? Not that there’s necessarily anything terribly wrong with that, it just doesn’t translate to very useful data, especially in small quantities.)
To anyone who is downvoting my comments: I’d be curious to hear your reasons, if you’re willing to explain them publicly. Though I do understand if you want to remain anonymous.
So, I’ve just finished reading this one.
To say that I found it unconvincing would be quite the understatement.
For one, Rachels seems entirely unwilling to even take seriously any objections to his moral premises or argument (he, again, takes the idea that we should care about animal suffering as given). He dismisses the strongest and most interesting objections outright; he selects the weakest objections to rebut, and condescendingly adds that “Resistance to [such] arguments usually stems from emotion, not reason. … Moreover, they [opponents of his argument] want to justify their next hamburger.”
Rachels then launches into a laundry list of other arguments against eating factory farmed animals, not based on a moral concern for animals. It seems that factory farming is bad in literally every way! It’s bad for animals, it’s bad for people, it causes diseases, eating meat is bad for our health, and more, and more.
(I’m always wary of such claims. When someone tells you thing A has bad effect X, you listen with concern; when they add that oh yeah, it also has bad effect Y! And Z! And W! … and then you discover that their political/ideological alignment is “opponent of thing A”… suspicion creeps in. Can eating meat really just be universally bad, bad in every way, so irredeemably bad that nothing whatsoever speaks in its favor? Well, there’s no law of nature that says that can’t be the case (e.g. eating uranium probably has no upside), but I’m inclined to treat such claims with skepticism, and, in any case, I’d prefer each aspect of meat-eating to be argued against separately, such that I can evaluate them individually, not be faced with a shotgun barrage of everything at once.)
Incidentally, I find the “factory farming is detrimental to local human populations” argument much more convincing than any of the others, certainly far more so than the animal-suffering argument. If the provided facts are accurate, then that’s the most salient case for stopping the practice — or, preferably, reforming it so as to mitigate the environmental and public-health impact.
I assign the “eating meat is bad for you” argument negligible weight. The one universal truth I’ve observed about nutrition claims is that finding someone else who’s making the opposite claim is trivial. (The corollary is that generalizing nutritional findings to all humans in all circumstances is nigh-impossible.) Red meat reduces lifespan? But the peoples of the Caucasus highlands eat almost nothing but red meat, and they’ve got some of the longest lifespans in the world. The citations in this section, incidentally, amount to “page so-and-so of some book” and “a study”. I can find “a study” that proves pretty much any nutritional claim. Thumbs down. (Vegetarians should really stay away from human-health arguments. It never makes them look good.)
Of the rest of the arguments Rachels makes, I found “industrial farming is worse than the Holocaust” (yes, he really claims this, making it clear that he means it) particularly ludicrous. Obviously, this argument is made with the express intent of being provocative; but as it does seem that Rachels genuinely believes it to be true, I can’t help but conclude that here is a person who is exemplifying one of the most egregious failure modes of naive utilitarianism. (How many chickens would I sacrifice to save my great-grandfather from the Nazis? N, where N is any number. This seems to argue either for rejecting straightforward aggregation of value or for assigning chickens a value of 0.)
“Partially hydrogenated vegetable oils prevent heart disease and improve lipid profile”. To the extent that it is true that it is trivial to find someone claiming the opposite of every nutritional claim, it is trivial to find people who are clearly just plain wrong. (The position you are taking is far too strong to be tenable.)
The opposite claim of “Food X causes problem Y” is not necessarily “Food X reduces problem Y”. “It is not the case that (or “there is no evidence that”) Food X causes problem Y” also counts as “opposite”. That’s how I meant it: every time someone says “X causes Y”, there’s some other study that concludes that eh, actually, it’s not clear that X causes Y, and in fact probably doesn’t.
SaidAchmiz, one difference between factory farming and the Holocaust is that the Nazis believed in the existence of an international conspiracy of the Jews to destroy the Aryan people. Humanity’s only justification for exploiting and killing nonhuman animals is that we enjoy the taste of their flesh. No one believes that factory-farmed nonhuman animals have done “us” any harm.
Perhaps the parallel with the (human) Holocaust fails for another reason. Pigs, for example, are at least as intelligent as prelinguistic toddlers; but are they less sentient? The same genes, neural processes, anatomical pathways and behavioural responses to noxious stimuli are found in pigs and toddlers alike. So I think the burden of proof here lies on meat-eating critics who deny any equivalence.
A third possible reason for denying the parallel is the issue of potential. Pigs (etc) lack the variant of the FOXP2 gene implicated in generative syntax. In consequence, pigs will never match the cognitive capacities of many but not all adult humans. The problem with this argument is that we don’t regard, say, humans with infantile Tay-Sachs, who lack the potential to become cognitively mature adults, as any less worthy of love, care and respect than healthy toddlers. Indeed, the Nazi treatment of congenitally handicapped humans (the “euthanasia” program) is often confused with the Holocaust, for which it provided many of the technical personnel.
A fourth reason to deny the parallel with the human Holocaust is that it’s offensive to Jewish people. Yet this uncomfortable parallel has been drawn by some Jewish writers: the comparison to “an eternal Treblinka”, for example, was made by Isaac Bashevis Singer, the Jewish-American Nobel laureate.
Apt comparison or otherwise, creating nonhuman-animal-friendly intelligence is going to be an immense challenge.
It seems to me like a far more relevant justification for exploiting and killing nonhuman animals is “and why shouldn’t we do this...?”, which is the same justification we use for exploiting and killing ore-bearing rocks. Which is to say, there’s no moral problem with doing this, so it needs no “justification”.
I make it clear in this post that I don’t deny the equivalence, and don’t think that very young children have the moral worth of cognitively developed humans. (The optimal legality of Doing Bad Things to them is a slightly more complicated matter.)
Well, I certainly do.
Eh...? Expand on this, please; I’m quite unsure what you mean here.
SaidAchmiz, to treat exploiting and killing nonhuman animals as ethically no different from “exploiting and killing ore-bearing rocks” does not suggest a cognitively ambitious level of empathetic understanding of other subjects of experience. Isn’t there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings in the imminent face of vastly superior intelligence and claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I’d have thought that the computational equivalent of Godlike capacity for perspective-taking is precisely what we should be aiming for.
No. Someone who cares about human-level beings but not animals will care about the plight of humans in the face of an AI, but there’s no reason they must care about the plight of animals in the face of humans, because they didn’t care about animals to begin with.
It may be that the best construction for a friendly AI is some kind of complex perspective taking that lends itself to caring about animals, but this is a fact about the world; it falls on the is side of the is-ought divide.
What the heck does this mean? (And why should I be interested in having it?)
Wikipedia says:
If that’s how you’re using “sentience”, then:
1) It’s not clear to me that (most) nonhuman animals have this quality;
2) This quality doesn’t seem central to moral worth.
So I see no irony.
If you use “sentience” to mean something else, then by all means clarify.
There are some other problems with your formulation, such as:
1) I don’t “belong to” MIRI (which is the organization you refer to, yes?). I have donated to them, which I suppose counts?
2) Your description of their mission, specifically the implied comparison of an FAI with humans, is inaccurate.
You use a lot of terms (“cognitively ambitious”, “cognitively humble”, “empathetic understanding”, “Godlike capacity for perspective-taking” (and “the computational equivalent” thereof)) that I’m not sure how to respond to, because it seems like either these phrases are exceedingly odd ways of referring to familiar concepts, or else they are incoherent and have no referents. I’m not sure which interpretation is dictated by the principle of charity here; I don’t want to just assume that I know what you’re talking about. So, if you please, do clarify what you mean by… any of what you just said.
Huh, no, you don’t normally go out of your way to do stuff unless there’s something in it for you or someone else.
Well, first of all, this is just false. People do things for the barest, most trivial of reasons all the time. You’re walking along the street and you kick a bottle that happens to turn up in your path. What’s in it for you? In the most trivial sense you could say that “I felt like it” is what’s in it for you, but then the concept rather loses its meaning.
In any case, that’s a tangent, because you mistook my meaning: I wasn’t talking about the motivation for doing something. I (and davidpearce, as I read him) was talking about the moral justification for eating meat. His comment, under my interpretation, was something like: “Exploiting and killing nonhuman animals carries great negative moral value. What moral justification do we have for doing this? (i.e. what positive moral value counterbalances it?) None but that we enjoy the taste of their flesh.” (Implied corollary: and that is inadequate moral justification!)
To which my response was, essentially, that morally neutral acts do not require such justification. (And by implication, I was contradicting davidpearce by claiming that killing and eating animals is a morally neutral act.) If I smash a rock, I don’t need to justify that (unless the rock was someone’s property, I suppose, which is not the issue we’re discussing). I might have any number of motivations for performing a morally neutral act, but they’re none of anyone’s business, and certainly not an issue for moral philosophers.
(Did you really not get all of this intended meaning from my comment...? If that’s how you interpreted what I said, shouldn’t you be objecting that smashing ore-bearing rocks is not, in fact, unmotivated, as I would seem to be implying, under your interpretation?)
“Public declarations would only be signaling, having little to do with maximizing good outcomes.”
On the contrary, trying to influence other people in the AI community to share Eliezer’s (apparent) concern for the suffering of animals is very important, for the reason given by David.
“I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one.”
a) Less Wrong doesn’t contain the best content on this topic.
b) Most of the posts disputing whether animal suffering matters are written by un-empathetic non-realists, so we would have to discuss meta-ethics and how to deal with meta-ethical uncertainty to convince them.
c) The reason has been given by Pablo Stafforini: when I directly experience the badness of suffering, I don’t only perceive that suffering is bad for me (or bad for someone with blonde hair, etc.), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering).
d) Even if there is some uncertainty about whether animal suffering is important, it should still be taken quite seriously; even if there were only a 50% chance that other humans mattered, it would be bad to lock them up in horrible conditions, or to signal through my actions to potentially influential people that doing so is OK.
This is an interesting argument, but it seems a bit truncated. Could you go into more detail?
Where is the best content on this topic, in your opinion?
Eh? Unpack this, please.