From an April 2019 Facebook discussion:
Rob Bensinger: avacyn:
I think one strong argument in favor of eating meat is that beef cattle (esp. grass-fed) might have net positive lives. If this is true, then the utilitarian line is to 1) eat more beef to increase demand, and 2) continue advocating for welfare reforms that will make cows’ lives even more positive.
Beef cattle are different from, e.g., factory-farmed chickens in that they live a long time (around 3 years on average vs. 6-7 weeks for broilers), and spend much of their lives grazing on stockers where they might have natural-ish lives.
Another argument in favor of eating beef is that it tends to lead to deforestation, which decreases total wild animal habitat, which one might think is worse than beef farms.
… I love how EA does veganism / animal welfare things. It’s really good.
(From the comment section on https://forum.effectivealtruism.org/posts/TyLxMrByKuCmzZx6b/reasons-to-eat-meat)
[… Note that in posting this I’m not intending] to advocate for a specific intervention; it’s more that it makes me happy to see thorough and outside-the-box reasoning from folks who are trying to help others, whether or not they have the same background views as me.
Jonathan Salter: Even if this line of reasoning might technically be correct in a narrow, first-order-effects type of way, my intuition tells me that that sort of behaviour would lessen EA’s credibility when pushing animal welfare messages, and that spreading general anti-speciesist norms and values is more important in the long run. Just my two cents though.
Rob Bensinger: My model of what EA should be shooting for is that it should establish its reputation as:
‘that group of people that engages in wonkish analyses and debates of moral issues at great length, and then actually acts on the conclusions they reach’;
‘that group of people that does lots of cost-benefit analyses and is willing to consider really counter-intuitive concerns rather than rejecting unusual ideas out of hand’;
‘that group of people that seems to be super concerned about its actual impact and nailing down all the details, rather than being content with good PR or moral signaling’.
I think that’s the niche EA would need to occupy to have the biggest positive impact in the future. And given how diverse EA is and how many disagreements there already are, the ship may have already sailed on us being able to coordinate and converge on moral interventions without any public discussion of things like wild animal suffering.
This is similar to a respect in which my views have changed about whether EAs and rationalists should become vegan en masse. In the past, I’ve given arguments like [in Inhuman Altruism and Revenge of the Meat People]:
A lot more EAs and rationalists should go vegan, because it really does seem like future generations will view 21st-century factory farming similarly to how we view 19th-century slavery today. It would be great to be “ahead of the curve” for once, and to clearly show that we’re not just ‘unusually good on some moral questions’ but actually morally exemplary in all the important ways that we can achieve.
I think this is really standing in for two different arguments:
First, a reputational argument saying ‘veganism is an unusually clear signal that we’re willing to take big, costly steps to do the right thing, and that we’re not just armchair theorists or insular contrarians; so we should put more value on sending that signal in order to convince other people that we’re really serious about this save-the-world, help-others, actually-act-based-on-the-abstract-arguments thing’.
Second, an ideal-advisor-style argument saying ‘meat-eating is probably worse than it seems, because the analytic arguments strongly support veganism but social pressure and social intuitions don’t back up those arguments, so we probably won’t emotionally feel their full moral force’.
One objection I got to the first argument is that it seems like the marginal effort and attention of a lot of EAs could save a lot more lives if it went to things that can have global effects, rather than small-scale personal effects. The reputational argument weighs against this, but there’s a reputational argument going the other way (I believe due to Katja Grace [update: Katja Grace wrote an in-depth response at the time, but this particular argument seems to be due to Paul Christiano and Oliver Habryka]):
‘What makes EA’s brand special and distinctive, and puts us in an unusual position to have an outsized impact on the world, is that we’re the group that gets really finicky and wonkish about EV and puts its energy into the things that seem highest-EV for the world. Prioritizing personal dietary choices over other, better uses of our time, and especially doing so for reputational or signaling reasons, seems like it actively goes against that unique aspect of EA, which makes it very questionable as a PR venture in the first place.’
This still left me feeling, on a gut level, like the ‘history will view us as participants in an atrocity’ argument is a strong one—not as a reputational argument, just as an actual argument (by historical analogy) that there’s something morally wrong with participating in factory farming at all, even if we’re (in other aspects of our life) trying to actively oppose factory farming or even-larger atrocities.
Since then, a few things have made me feel like the latter argument’s force isn’t so strong. First, I’ve updated some on the object level about the probability that different species are conscious and that different species in particular circumstances have net-negative lives (though I still think there’s a high-enough-to-be-worth-massively-worrying-about probability that farmed chicken, beef, etc. are all causing immense amounts of suffering).
Second, I’ve realized that when I’ve done my ‘what would future generations think?’ ideal-advisor test in the past, I’ve actually been doing something weird: taking 21st-century intuitions about which things matter most and are most emotionally salient, and projecting them forward to imagine a society where salience works the same way on the meta level, but the object-level social pressures/dynamics are a bit different. But that seems like the wrong heuristic for past generations to have used, if they wanted to make proper use of the ideal-advisor test.
Jeremy Bentham’s exemplary forward-thinking moral views, for example, seem like a thing you’d achieve by going ‘imagine future generations that are just super reasonable and analytical about all these things, and view things as atrocities in proportion to what the strongest arguments say’, rather than by drawing analogies to things that present-day people find especially atrocious about the past.
(People who have read Bentham: did Bentham ever use intuition pumps like either of these? If so, did either line of thinking seem like it actually played a role in how he reached his conclusions, as opposed to being arguments for persuading others?)
Imagine instead a future society that’s most horrified, above all else, by failures of reasoning process like ‘foreseeably allocating attention and effort to something other than the thing that looks highest-EV to you’. Imagine a visceral gut-level reaction to systematic decision-making errors (that foreseeably have very negative EV) even more severe than modernity’s most negative gut-level reactions to world events. Those failures of reasoning process, after all, are much more directly in your control (and much more influenceable by moral praise and condemnation) than action outcomes. That seems like a hypothetical that pushes in a pretty different direction in a lot of these cases. (And one that converges more with the obvious direct ‘just do the best thing’ argument, which doesn’t need any defense.)
I really like the FB crossposts here, and also really like this specific comment. Might be worth polishing it into a top-level post, either here or on the EA Forum sometime.
Thanks! :) I’m currently not planning to polish it; part of the appeal of cross-posting from Facebook for me is that I can keep it timeboxed by treating it as an artifact of something I already said. I guess someone else could cannibalize it into a prettier stand-alone post.