I agree, and I think the focus on animal welfare while there is so much outstanding human suffering to be tackled is a weird mistake the EA community seems to be making. Even more importantly, the importance of aligning AI, and the potential relevance of moral philosophy to that aim, seem to vastly overwhelm anything whatsoever happening with the environment. If you want to help the environment or animals, the only plausible way to do so is to help align AI with your values (including your value of the environment and animals). We’re at a super weird crux point where everything channels through that.
I don’t think it’s a mistake to focus on animal suffering over human suffering (if we’re only comparing these two), since it seems likely we can reduce animal suffering more cost-effectively, and possibly much more cost-effectively, depending on your values. See:
https://forum.effectivealtruism.org/posts/ahr8k42ZMTvTmTdwm/how-good-is-the-humane-league-compared-to-the-against
https://forum.effectivealtruism.org/posts/fogJKYXvqzkr9KCud/a-complete-quantitative-model-for-cause-selection#global-poverty-vs-animal-advocacy
https://forum.effectivealtruism.org/posts/nDgCKwjBKwFvcBsts/corporate-campaigns-for-chicken-welfare-are-10-000-times-as
https://forum.effectivealtruism.org/posts/rvvwCcixmEep4RSjg/prioritizing-x-risks-may-require-caring-about-future-people
“If you want to help the environment or animals, the only plausible way to do so is to help align AI with your values (including your value of the environment and animals). We’re at a super weird crux point where everything channels through that.”
We can still prevent suffering up until AGI arrives, and AGI might not come for decades. Even after it arrives, if we don’t go extinct (extinction would very plausibly come with the end of animal suffering!), there can still be popular resistance to helping animals or to not harming them. You might say influencing AI values is the most cost-effective way to help animals, and that’s plausible, but it’s not obvious. Some people are looking at moral circle expansion as a way to improve the far future, like Sentience Institute, but mostly for artificial sentience.