An important question is whether there is a net loss or gain of sentient life by avoiding eating meat. Or, if there is a substitution between different sentient life-forms, is there a net gain to quality of life?
Do we know where the biomass that currently goes into farmed animals would end up if we stopped using farmed animals? Would it go into humans, or into vehicles (biofuels) or into wildlife via land taken out of agricultural production?
Should we assume that farmed animals have a negative quality of life (so that in utilitarian terms, the world would be better if they stopped existing and weren’t replaced by other sentient beings)? The animals themselves would probably not assess their lives as having negative value (as far as I’m aware, farmed animals do not attempt to commit suicide at every available opportunity).
Do farmed animals have a lower quality of life than animals living in the wild? Remember that nature is not a nice place either...
My personal guess is that without meat, we would end up with more humans, though mostly poorer humans. Since even the poorest humans would probably have a higher quality of life than the animals they substituted, it looks like a net gain from the point of view of total utility. But whether that is really a good thing or not may depend on whether you are a total utilitarian or an average utilitarian.
(as far as I’m aware, farmed animals do not attempt to commit suicide at every available opportunity)

I object to this as the general metric for “should a life be brought into existence?” (I’m something approximating an average utilitarian. To the extent that I’m a total utilitarian, I think Eliezer’s post about Lives Worth Celebrating is relevant.)
Also, less controversially, I’d like to note that factory-farmed animals really don’t have much opportunity to end their own lives even if they wanted to.
For that matter, even if they did have the opportunity, livestock species may not have the abstract reasoning abilities to recognize that suicide is even possible.
Pigs might have the intelligence for that, but for cows and chickens, I doubt it. Suicide isn’t an evolutionarily favorable adaptation; it’s a product of abstract reasoning about death that most animals are unlikely to be capable of.
Good points, but I suspect they are dominated by another part of the calculation: in the future, with advanced technology, we might be able to seed life on other planets or even simulate ecosystems. By getting people now to care about suffering in nonhumans, we make it more likely that future generations will care for them as well. And antispeciesism also seems closely related to anti-substratism (e.g. caring about simulated humans, even though they’re not carbon-based).
If you are the sort of person who cares about all sorts of suffering, raising antispeciesist awareness might be very positive for far-future-related reasons, regardless of whether the direct (short-term) impact is actually positive, neutral, or even slightly negative.
The other long-term consideration is that whatever we do to animals, AIs may well do to us.
We don’t want future AIs raising us in cramped cages, purely for their own amusement, on the grounds that their utility is much more important than ours. But we also don’t want them to exterminate us on “compassionate” grounds. (Those poor humans, why let them suffer so? Let’s replace them by a few more happy, wire-heading AIs like us!)
Don’t many/most people here want there to be posthumans, which may well cross the species barrier? I don’t think there is an “essence of humanity” that carries over from humans to posthumans by virtue of descent, so that case already seems somewhat analogous to the wireheading-AIs case. And whether the AI would wirehead or keep a preference architecture intact depends on what we/it value. If we do value complex preferences, and if we want many beings in the world to have them mostly fulfilled, I’d assume there would be more awesome or more effective designs than current humans. However, if this view implies that killing is bad because it violates preferences, then replacement would, to some extent, be a bad thing and the AI might not do it.
That argument would seem to apply to plants or even to non-intelligent machines as well as to animals, unless you include a missing premise stating that AI/human interaction is similar to human/animal interaction in a way that 1) human/plant or human/washing machine interaction is not, and 2) is relevant. Any such missing premise would basically be an entire argument for vegetarianism already—the “in comparison to AIs” part of the argument is an insubstantial gloss on it.
Furthermore, why would you expect what we do to constrain what AIs do anyway? I’d sooner expect that AIs would do things to us based on their own reasons regardless of what we do to other targets.
Perhaps this is true if the AI is supremely intelligent, but if the AI is only an order of magnitude more intelligent than us, or better by some other metric, the way we treat animals could be significant.
More relevantly, if an AI is learning anything at all about morality from us or from the people programming it, I think it is extremely wise that the relevant individuals be vegan for these reasons (better safe than sorry). Essentially, I argue that there is a very significant chance that the way we treat other animals could be relevant to how an AI treats us (better treatment corresponding to better later outcomes for us).
“Other animals” is a gerrymandered reference class. Why would the AI specifically care about how we treat “other animals”, as opposed to “other biological entities”, “other multicellular beings”, or “other beings who can do mathematics”?
Because other animals are also sentient beings capable of feeling pain. Other multicellular beings aren’t in general.
That’s the kind of thing I was objecting to. “‘Other animals’ are capable of feeling pain” is an independent argument for vegetarianism. Adding the AI to the argument doesn’t really get you anything, since the AI shouldn’t care about it unless it was useful as an argument for vegetarianism without the AI.
It’s also still a gerrymandered reference class. “The AI cares about how we treat other beings that feel pain” is just as arbitrary as “the AI cares about how we treat ‘other animals’”—by explaining the latter in terms of the former, you’re just explaining one arbitrary category by pointing out that it fits into another arbitrary category. Why doesn’t the AI care about how we treat all beings who can do mathematics (or are capable of being taught mathematics), or how we treat all beings at least as smart as ourselves, or how we treat all beings that are at least 1⁄3 the intelligence of ourselves, or even how we treat all mammals or all machines or all lesser AIs?
Heh.
Have you been nice to your smartphone today? Treat your laptop with sufficient respect?
DID YOU EVER LET YOUR TAMAGOTCHI DIE?
Perhaps it should. Being vegan covers all these bases except machines/AIs, which arguably (including by me) also ought to hold some non-negligible moral weight.
The question is really “why does the AI have that exact limit”. Phrased in terms of classes, it’s “why does the AI have that specific class”; having another class that includes it doesn’t count, since it doesn’t have the same limit.
After significant reflection, what I’m trying to say is that I think it is obvious that non-human animals experience suffering and that this suffering carries moral weight (if the same methods were applied to humans, we would call most modern farming conditions torture, among other things).
Furthermore, there are a lot of edge cases of humanity where people can’t learn mathematics or are otherwise substantially less smart than non-human animals (the young, if future potential doesn’t matter that much; the very old; the mentally disabled; people in comas; etc.). I would prefer to live in a world where an AI thinks that beings that suffer, but aren’t necessarily sufficiently smart, still matter in general. I would also rather the people designing said AIs agree with this.
But the original argument is that we shouldn’t eat animals because AIs would treat us like we treat animals. That argument implies an AI whose ethical system can’t be specified or controlled in detail, so we have to worry how the AI would treat us.
If you have enough control over the ethics used by the AI that you can design the AI to care about suffering, then this argument doesn’t show a real problem—if you could program the AI to care about suffering, surely you could just program it to directly care about humans. Then we could eat as many animals as we want and the AI still wouldn’t use that as a basis to mistreat us.
Yes, I guess I was operating under the assumption that we would not be able to constrain the ethics of a sufficiently advanced AI at all by simple programming methods.
Though I’ve spent an extraordinarily large amount of time lurking on this and similar sites, upon reflection I’m probably not the best-placed person to carry out a debate about how the hypothetical values of an AI depend on ours. And indeed this would not be my primary justification for avoiding nonhuman suffering. I still think its avoidance is an incredibly important and effective meme to propagate culturally.
Go start recruiting Jains as AI researchers… X-/
I don’t see why. Jainism is far from the only philosophy associated with veganism.
Jainism has a remarkably wide concept of creatures not to be harmed (e.g. specifically including insects). I don’t see why you are so focused on the diet.
Vegans as a general category don’t unnecessarily harm insects and certainly don’t eat them either. And I’m not just focused on the diet, actually.
Come to think of it, what are we even arguing about at this point? I didn’t understand your emoticon there and got thrown off by it.
I’ve yet to meet a first-world vegan who would look benevolently at a mosquito sucking blood out of her.
I don’t think we’re arguing at all. That, of course, doesn’t mean that we agree.
The emoticon hinted that I wasn’t entirely serious.
This rather assumes we’re striving for as many lives as possible, does it not?
I mean, that’s a defensible position, but I don’t think it should be assumed.
A difficulty of utilitarianism is the question of felicific exchange rates. If you cast morality as a utility function, then you are obliged to come up with answers to bizarre hypothetical questions, like how many ice creams the life of your firstborn is worth, because you have defined the right in terms of maximized utility.
If you cast morality as a dispute-avoidance mechanism between social agents possessed of power and desire, then you are less likely to end up in this kind of dead end, but the price of this casting is the recognition that different agents will have different values and that objectivity of morals is not always possible.
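To make the exchange-rate point concrete, here is a minimal toy sketch (assuming an additive utility function with made-up weights; the names and numbers are purely illustrative, not anyone’s actual values):

```python
# Toy illustration only: once morality is cast as a single real-valued
# utility function, it mechanically implies an exchange rate between any
# two goods, however bizarre the comparison. All weights are made up.

LIFE_UTILITY = 1_000_000.0   # utility arbitrarily assigned to the child's life
ICE_CREAM_UTILITY = 0.1      # utility arbitrarily assigned to one ice cream


def total_utility(ice_creams: float, firstborn_alive: bool) -> float:
    """A toy additive utility function over the two 'goods'."""
    return LIFE_UTILITY * firstborn_alive + ICE_CREAM_UTILITY * ice_creams


# The framework itself forces an answer to the bizarre hypothetical:
implied_rate = LIFE_UTILITY / ICE_CREAM_UTILITY
assert total_utility(implied_rate, firstborn_alive=False) >= total_utility(0, firstborn_alive=True)
print(f"This particular utility function prices the life at {implied_rate:,.0f} ice creams.")
```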
Agreed, but the OP was talking about “effective altruism”, rather than about “effective morality” in general. It’s difficult to talk about altruism at all except within some sort of consequentialist framework. And while there is no simple way of comparing goods, consideration of “effective” altruism (how much good can I do for a relatively small amount of money?) does force us to look at and make very difficult tradeoffs between different goods.
Incidentally, I generally subscribe to rule consequentialism though without any simple utility function, and for much the reasons you discuss. Avoiding vicious disputes between social agents with different values is, as I understand it, one of the “good things” that a system of moral rules needs to achieve.
Rule consequentialism is what I call a multi-threaded moral theory—a blend of deontology and consequentialism, if you will. I advocate multi-threaded theories. The idea that there is a correct single-threaded theory of morality seems implausible. Moral rules, to me, are a subset of modal rules for survival-focused agents.
To work out if something is right, run a bunch of ‘algorithms’ (in parallel threads, if you like), not just one; a rough sketch in code follows the list below. (No commitment made to Turing computability of said ‘algorithms’, though...)
So...
#assume virtue ethics
If I do X, what virtues does this display/exhibit?
#assume categorical imperative
If everyone does X, how would I value the world then?
#assume principle of utility
Will X increase the greatest happiness for the greatest number?
#assume golden rule
If X were done to me instead of my doing X, would I accept this?
#emotions
If I do X, will this trigger any emotional reaction (disgust, guilt, shame, embarrassment, joy, ecstasy, triumph, etc.)?
#laws
Is there a law or sanction if I do X?
#precedent
Have I done X before? How did that go?
#relationships
If I do X, what impact will that have on my relationships?
#motives goal
Do I want to do X?
#interest welfare prudence
Is X in my interest? Safe? Dangerous? Etc.
#value
Does X have value? To me, to others, etc.?
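Here is the promised rough sketch of the multi-threaded idea in Python, purely illustrative: the check functions, their names, and the placeholder scores are all hypothetical, and nothing here is meant as a claim about how the individual ethical tests should actually be scored or weighed.

```python
# Illustrative sketch only: each "thread" is a stubbed ethical check that
# returns a verdict in [-1, 1]. The check names and scores are placeholders.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Dict


def virtue_check(action: str) -> float:
    """If I do X, what virtues or vices does this display?"""
    return 0.2  # placeholder verdict


def categorical_imperative_check(action: str) -> float:
    """If everyone did X, how would I value the world then?"""
    return -0.5  # placeholder verdict


def utility_check(action: str) -> float:
    """Will X increase the greatest happiness for the greatest number?"""
    return 0.4  # placeholder verdict


def legal_check(action: str) -> float:
    """Is there a law or sanction if I do X?"""
    return -1.0  # placeholder verdict


CHECKS: Dict[str, Callable[[str], float]] = {
    "virtue ethics": virtue_check,
    "categorical imperative": categorical_imperative_check,
    "principle of utility": utility_check,
    "laws": legal_check,
}


def moral_verdicts(action: str) -> Dict[str, float]:
    """Run each check 'thread' in parallel and collect all the verdicts.

    No single verdict is decisive; the caller weighs the whole profile,
    and one or two strong verdicts may settle an easy case.
    """
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(check, action) for name, check in CHECKS.items()}
        return {name: future.result() for name, future in futures.items()}


if __name__ == "__main__":
    for name, verdict in moral_verdicts("do X").items():
        print(f"{name:25s} {verdict:+.1f}")
```

In this framing, the “slam dunk” cases mentioned next are simply profiles where one or two verdicts dominate.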
Sometimes one or two reasons will provide a slam-dunk decision: it’s illegal and I don’t want to do it anyway. Other times, the call is harder.
Personally, I find a range of considerations more persuasive than one. I am inclined to sentimentalism at the meta-ethical tier and particularism at the normative and applied ethical tiers.
Of course, strictly speaking particularism implies that normative ethical theories are false over-generalizations and that a theory of reasons rests on a theory of values. Values are fundamentally emotive. No amount of post hoc moral rationalization will change that.