A few premises
a) Some animals matter
b) Not all animals matter; some extremely simple animals don’t matter.
c) There are anti-correlations of the form "the more cows there are, the fewer insects (or rodents) there are" that hold true in our world, in which one of the species is substantially more cognitively capable than the other.
d) There is currently no consensus on how simple a mind or cognitive system has to be for it to matter. Arguably this consensus cannot be reached, since different hypotheses will use different correlates to try to find the morally worthy thing.
Conclusion
e) We do not know now, and won’t know in the medium-term future, whether increasing or decreasing consumption of farm animals is desirable from a utilitarian perspective.
I don’t know where this argument fails. I’ve shown it to many EAs and no one has seen a big problem with it so far. However, some people think it is just stupid, and I’m happy to see it proven wrong.
To say the same in less abstract form: World 1 and World 2 have the same amount of land. In World 1, people raise and eat cows, so there are 100 cows and 1,000 insects. In World 2, people are vegan and there is more forest land, so World 2 has 10,000 insects and no cows. Which world is ethically better seems to hinge on the comparative moral worth of insects and cows. Given that we don’t know what it is like to be a bat, or a cow, or a bumblebee, we cannot decide which world is ethically more desirable. Therefore we have no reason to direct our actions toward making our world more like World 1 or World 2.
You can swap insects (or rodents) and cows for any pair of animals that are anti-correlated in nature and cognitively dissimilar, where the size of the anti-correlation is larger than your certainty about which animal has more moral worth.
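To make the dependence on the unknown weights concrete, here is a minimal sketch in Python. The counts come from the World 1 / World 2 example above; the per-animal moral weights are entirely made-up assumptions, not claims about actual moral worth. It only shows how the verdict flips as the assumed weight of an insect relative to a cow changes:

```python
# Toy comparison of World 1 (100 cows, 1000 insects) and World 2 (0 cows, 10000 insects).
# The cow weight is fixed at 1.0 and the insect weight is the unknown we vary;
# all weights are illustrative assumptions only.

def world_value(cows, insects, cow_weight, insect_weight):
    """Total welfare as a simple weighted sum of animal counts."""
    return cows * cow_weight + insects * insect_weight

worlds = {"World 1": (100, 1000), "World 2": (0, 10000)}

# Include the possibility that insect lives are net-negative.
for insect_weight in (-0.01, 0.0, 0.005, 0.02):
    values = {name: world_value(cows, insects, 1.0, insect_weight)
              for name, (cows, insects) in worlds.items()}
    better = max(values, key=values.get)
    print(f"insect weight {insect_weight:+.3f}: {values} -> {better} comes out ahead")
```

On these made-up numbers the verdict flips once an insect is assumed to be worth more than roughly 0.011 of a cow, which is the sense in which the comparison hinges on a quantity we do not know.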
Animals that matter may be better off not existing than living in suffering. Cows’ greater moral worth might be exactly the reason you would rather not create and torment them, as opposed to insects.
On the other hand, see http://foundational-research.org/publications/importance-of-wild-animal-suffering/ , which argues that insects have lives with negative value because of suffering. In that case, World 2 is strictly worse than world 1: fewer cows (who do have lives with value) and more insects (whose lives have negative value).
Drethelin and Jiro, I was taking for granted, because it is a common opinion among vegans, that farmed cows’ lives are not worth living, and that if insects’ lives are like anything at all, they are awful.
The reasoning works in all four cases: both animals have positive lives and are anti-correlated in nature; both have negative lives and are anti-correlated; or the signs differ (animal 1 positive and animal 2 negative, or vice versa) and they are positively correlated.
EDIT: After discussing this here, I had a long discussion over email about it with two EAs, and decided to put forth my final arguments:
I’ll give it my best shot. It is also my final shot; if it is not persuasive, I may give up on the task entirely (because I have a book on altruism, the world has Superintelligence coming soon, and I feel we are reaching the point of diminishing returns on opinion change).
1) My argument relies heavily on the idea that any attempt to overcome the static friction of whatever food habits people already have will require a lot of momentum. So the fact that we are stuck in a random local non-optimum is a feature of the argument, not a failure. The same would go for people arguing in favor of speaking Esperanto, or of re-establishing the rationalist community in Palau (there are some posts about this around). It’s the static friction that matters. I think we are stuck with QWERTY until the Intelligence Explosion. I think we are stuck with some distribution of vegans, vegetarians and causal-assassins until the IE.
2) I’m not out to end veganism and vegetarianism on the grounds that there is a lot of uncertainty that won’t be resolved before the IE. I’m just out to stop high-status people within the EA community from trying to make others change in either direction. I’m out to save EA time and attention. I’m out on this topic because I’ve seen countless hours of discussion among really smart, productive, awesome, world-saving people in Brazil, the UK and the US dedicated to it, as if it were a clear-cut net good, when in fact it isn’t. Not as good as increasing insight, coordination, cooperation, control, safety-savvy AI tech, building community, getting the order right, differential progress, or fundraising, for instance.
3) Maybe I should not worry. Maybe there are cheaper hours than those dedicated to veganism to put to good use among the high intellectuals. But dietary habits have a two-thousand-year-old tradition of being used as shibboleths, as implicit markers that distinguish friend from foe. And once again, as I told you here: as long as both teams continue in this lifelong quest together, and as long as both shut up and multiply, it doesn’t matter. At the end of the day, we (have reason to) act alike. I just want to make sure that we get as many as possible, as strong as possible, and set the controls for the heart of the sun.
4) Because I’m against public veganism advocacy within the EA, LW, and CFAR communities, I’ve advocated in the past for Private Veganism, for outsourcing vegetarianism, and even, in the same post, for pandemizing veganism, like a vaccine in the water supply. I like animals. I just like future animals as much as I like current animals, so if animals are stealing attention away from my FAI friends, like you guys, I’ll make my stand “against” them, for them.
5) Bostrom puts it clearly. I cite Peter Singer (forthcoming); I can’t quote him here on LW because it’s unpublished, sorry. [Omitted text in which Singer quotes Bostrom.] (my emphasis) The point here is that presentations or posts within the EA community do not increase the number of EAs; they only scatter EA time, which is in part (along with the Red Bull + Rockstar drink) why I felt so averse to that seemingly harmless presentation.
6) Yudkowsky puts it ironically: “Okay, so all of those risks should affect 4e20 stars which should beat the present value of all human and animal life on the surface of one planet making inefficient use of around a millionth of the output of one star. I do understand that this perspective may sear away some people’s souls, but in reality we are a tiny little blue speck containing a little tribe of tiny people (and animals), a tiny blue speck from which hangs, downward in Time, a vast heavily-populated world. A world of people who are helpless, who have no voices that can move upward and reach the tiny blue speck, who can only look up desperately at the tiny blue speck and hope we don’t screw up, because if that tiny blue speck snaps, their whole huge world will drop out of Time into the void of never-existed. The joys and sorrows of the village of tiny people and animals on that tiny blue speck don’t matter very much compared to the sheer terror of dropping the entire heavily-populated civilization that is, somehow, hanging from that tiny blue speck. They cannot speak for themselves, so I try to speak for them.”
That is my case against public advocacy of dietary habits on moral grounds. It is similar, though shorter-sighted, to Paul’s “Against Moral Advocacy” at Rational Altruist. I have no intention of Pascal’s-mugging or Pascal’s-wagering anyone, and I’m willing and able to change my mind about these topics. But I find the force of these arguments (those here plus the one on top) to be overwhelming.
See also: http://reflectivedisequilibrium.blogspot.com/2013/07/vegan-advocacy-and-pessimism-about-wild.html
Do we know what the relative moral worth of cows and insects is? No. But we can make a best guess based on the available evidence, the same way we do with any other kind of uncertainty. It seems to me like this argument is just “we can’t be certain about anything, therefore we have no basis on which to choose one action over another”, dressed up a little.
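To spell out what “make a best guess” would look like in practice, here is a minimal sketch, assuming a placeholder credence distribution over an insect’s moral weight relative to a cow’s (nobody in the thread has defended these particular numbers), that compares the two worlds from the example above by expected value:

```python
import random

random.seed(0)

def sample_insect_weight():
    """Placeholder credence over an insect's moral weight relative to a cow's (cow = 1.0)."""
    r = random.random()
    if r < 0.5:
        return random.uniform(-0.002, 0.002)   # probably roughly negligible, possibly negative
    elif r < 0.9:
        return random.uniform(0.0, 0.01)       # maybe small but positive
    else:
        return random.uniform(0.01, 0.05)      # small chance insects matter quite a bit

def expected_values(n=100_000):
    w1_total = w2_total = 0.0
    for _ in range(n):
        w = sample_insect_weight()
        w1_total += 100 * 1.0 + 1000 * w   # World 1: 100 cows, 1000 insects
        w2_total += 10000 * w              # World 2: no cows, 10000 insects
    return w1_total / n, w2_total / n

w1, w2 = expected_values()
print(f"Expected value: World 1 = {w1:.1f}, World 2 = {w2:.1f}")
```

The answer is only as good as the credence distribution fed in, which is the point the reply below presses on.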
You assume that this moral worth objectively exists waiting to be discovered and known.
No I don’t, it can be subjective and the argument still goes through.
If you comply with the VNM axioms, then you have an (effective) utility function, and so that moral worth is calculable. And if you don’t follow those axioms, you get Dutch-booked.
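A toy money-pump is the standard illustration behind the Dutch-book remark: an agent whose preferences cycle (here B over A, C over B, and A over C, with the names, fee, and number of laps purely illustrative) will pay for each “upgrade” around the cycle and end up holding what it started with, strictly poorer.

```python
# Cyclic preferences: B is preferred to A, C to B, and A to C (violating transitivity).
prefers = {("A", "B"): "B", ("B", "C"): "C", ("C", "A"): "A"}

def money_pump(start="A", fee=1.0, laps=3):
    holding, money = start, 0.0
    offers = [("A", "B"), ("B", "C"), ("C", "A")] * laps
    for current, offered in offers:
        if holding == current and prefers[(current, offered)] == offered:
            holding = offered   # the agent accepts the swap it prefers...
            money -= fee        # ...and pays a small fee each time
    return holding, money

holding, money = money_pump()
print(f"The agent ends up holding {holding} again and has paid {-money:.2f} for the privilege")
```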
How does that work? Alice thinks the moral worth of a cow is high enough not to mistreat or eat it. Bob thinks that the moral worth of a cow is zero and cares only about the quality of his steak. How are you going to reconcile their views (or even estimates)?
I’m not. Each of them has a moral opinion and knows what they believe to be the right action. Their disagreement is an ordinary moral disagreement; there are plenty of other moral questions where there is no consensus.
So in this context what does knowing “the relative moral worth of cows and insects” mean?
The same thing as knowing how delicious a certain food is.
Sure, but then there is no problem in knowing, ever. You said “we don’t know, but we can make an estimate” and with respect to my personal opinion about how delicious a certain food is, I have immediate direct knowledge and no need for estimates.