Drethelin and Jiro, I was taking for granted (because it is a common opinion among vegans) that cows’ lives are not morally worth living, and that insects’ lives, if they are like anything at all, are awful.
The reasoning works in all four cases: animal 1 has a positive life and animal 2 has a positive life and their populations are anti-correlated in nature; both lives are negative and the populations are anti-correlated; or the signs differ (animal 1 positive and animal 2 negative, or vice versa) and the populations are positively correlated.
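To see the case structure explicitly, here is a minimal enumeration of the four sign/correlation combinations named above (my own illustration; it only lists the cases, not the underlying moral reasoning):

```python
# My own enumeration of the welfare-sign / correlation cases named above;
# it spells out the case structure only, not the moral argument itself.
cases = [
    ("animal 1: positive life", "animal 2: positive life", "populations anti-correlated"),
    ("animal 1: negative life", "animal 2: negative life", "populations anti-correlated"),
    ("animal 1: positive life", "animal 2: negative life", "populations positively correlated"),
    ("animal 1: negative life", "animal 2: positive life", "populations positively correlated"),
]
for a1, a2, corr in cases:
    print(f"{a1} | {a2} | {corr}")
```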
EDIT: After the discussion here I had a long email exchange about this with two EAs, and decided to put forth my final arguments:
I’ll give it my best shot. It is also my final shot. If it is not persuasive, I may give up on the task entirely (because I have a book on altruism and the world has a Superintelligence coming soon, and I feel we are reaching the point of only marginal opinion change).
1) My argument relies heavily on the idea that any attempt to overcome the static friction of whatever food habits people already have will require a lot of momentum. So the fact that we are stuck in a random local non-optimum is a feature of the argument, not a failure. The same would go for people arguing in favor of speaking Esperanto, or of re-establishing the rationalist community in Palau (there are some posts about this around). It’s the static friction that matters. I think we are stuck with Qwerty until the Intelligence Explosion. I think we are stuck with some distribution of vegans, vegetarians and causal-assassins until the IE.
2) I’m not out to end veganism and vegetarianism on the grounds that there is a lot of uncertainty that won’t be resolved before the IE; I’m just out to stop high-status people within the EA community from trying to make people change in either direction. I’m out to save EA time and attention. I’m out on this topic because I’ve seen countless hours of discussion among really smart, productive, awesome, world-saving people in Brazil, the UK and the US being dedicated to it, as if it were a super clear-cut net good, when in fact it isn’t. It is not as good as increasing insight, coordination, cooperation, control, safety-savvy AI tech, building community, getting the order right, differential progress or fundraising, for instance.
3) Maybe I should not worry. Maybe there are cheaper hours than those dedicated to veganism to put to good use among the high intellectuals. But dietary habits have a two-thousand-year-old tradition of being used as shibboleths, as implicit markers that distinguish friend from foe. And once again, as I told you here: as long as both teams continue on this lifelong quest together, and as long as both shut up and multiply, it doesn’t matter. At the end of the day, we (have reason to) act alike. I just want to make sure that we get as many as possible, as strong as possible, and set the controls for the heart of the sun.
4) Because I’m out against public veganism advocacy within the EA, LW and CFAR communities, I’ve advocated in the past for Private Veganism, for outsourcing vegetarianism, and even, in the same post, for pandemizing veganism, like a vaccine in the water supply. I like animals. I just like future animals as much as I like current animals, so if animals are stealing attention away from my FAI friends, like you guys, I’ll make my stand “against” them, for them.
5) Bostrom puts it clearly. I cite Peter Singer (forthcoming); I can’t quote him here on LW because the text is unpublished, sorry.
Omitted text in which Singer quotes Bostrom.
(my emphasis)
The point here is that presentations or posts within the EA community do not increase the number of EAs; they only scatter EA time, which is in part (along with the Red Bull + Rockstar drink) why I felt so averse to that seemingly harmless presentation.
6) Eliezer Yudkowsky puts it ironically:
“Okay, so all of those risks should affect 4e20 stars which should beat the present value of all human and animal life on the surface of one planet making inefficient use of around a millionth of the output of one star. I do understand that this perspective may sear away some people’s souls, but in reality we are a tiny little blue speck containing a little tribe of tiny people (and animals), a tiny blue speck from which hangs, downward in Time, a vast heavily-populated world. A world of people who are helpless, who have no voices that can move upward and reach the tiny blue speck, who can only look up desperately at the tiny blue speck and hope we don’t screw up, because if that tiny blue speck snaps, their whole huge world will drop out of Time into the void of never-existed. The joys and sorrows of the village of tiny people and animals on that tiny blue speck don’t matter very much compared to the sheer terror of dropping the entire heavily-populated civilization that is, somehow, hanging from that tiny blue speck.
They cannot speak for themselves, so I try to speak for them.”
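As a rough back-of-the-envelope illustration of the scale comparison in that quote (my own arithmetic, using only the two figures Yudkowsky mentions: 4e20 stars at stake versus one planet using about a millionth of one star’s output):

```python
# Rough scale comparison implied by the quote above; my own arithmetic,
# using only the two figures the quote mentions, not Yudkowsky's calculation.
stars_at_stake = 4e20        # stars whose future value existential risk puts at stake
earth_usage_in_stars = 1e-6  # Earth life uses roughly a millionth of one star's output

ratio = stars_at_stake / earth_usage_in_stars
print(f"The future at stake is roughly {ratio:.0e} times Earth's present 'stellar budget'")
# prints: The future at stake is roughly 4e+26 times Earth's present 'stellar budget'
```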
That is my case against public advocacy of dietary habits on moral grounds; it is similar, though shorter-sighted, to Paul’s “Against Moral Advocacy” at Rational Altruist.
I have no intention of Pascal’s mugging or Pascal’s wagering here, and I’m willing and able to change my mind about these topics. But I find the force of these arguments (those here plus the one on top) overwhelming.