Note that there might be other crucial factors in assessing whether ‘more factory farming’ or ‘less factory farming’ is good on net — e.g., the effect on wild animals, including indirect effects like ‘factory farming changes the global climate, which changes various ecosystems around the world, which increases/decreases the population of various species (or changes what their lives are like)’.
It then matters a lot how likely various wild animal species are to be moral patients, whether their lives tend to be ‘worse than death’ vs. ‘better than death’, etc.
And regarding:
The number would be much higher than 60% on strictly utilitarian grounds, but humans aren’t strict utilitarians and it makes sense for people working hard on improving animal lives to develop strong feelings about their own personal relationship to factory farming, or to want to self-signal their commitment in some fashion.
I do think that most of EA’s distinctive moral views are best understood as ‘moves in the direction of utilitarianism’ relative to the typical layperson’s moral intuitions. This is interesting because utilitarianism seems false as a general theory of human value (e.g., I don’t reflectively endorse being perfectly morally impartial between my family and a stranger). But utilitarianism seems to get one important core thing right, which is ‘when the stakes are sufficiently high and there aren’t complicating factors, you should definitely be impartial, consequentialist, scope-sensitive, etc. in your high-impact decisions’; the weird features of EA morality seem to mostly be about emulating impartial benevolent maximization in this specific way, without endorsing utilitarianism as a whole.
Like, taking an interest in human challenge trials is a very recognizably ‘EA-moral-orientation’ thing to do, even though it’s not a thing EAs have traditionally cared about — and that’s because it involves thinking seriously, quantitatively, and consistently about costs and benefits, it’s consequentialist, it’s impartially trying to improve welfare, etc.
There’s a general, very simple and unified thread running through all of these moral divergences AFAICT, and it’s something like ‘when choices are simultaneously low-effort enough and high-impact enough, and don’t involve severe obvious violations of ordinary interpersonal ethics like “don’t murder”, utilitarianism gets the right answer’. And I think this is because ‘impartially maximize welfare’ is itself a simple idea, and an incredibly crucial part of human morality.