“or even whether I should spend much time making that decision, because that might trade off to some extent with time and money I could put towards longtermist efforts (which seem more choice-worthy according to other moral theories I have some credence in).”
What longtermist efforts might there be according to the theory that, if you were certain of it, you’d choose to be vegan?
Not sure I understand this question. I’ll basically expand on/explain what I said, and hope that answers your question somewhere. (Disclaimer: This is fairly unpolished, and I’m trying more to provide you an accurate model of my current personal thinking than provide you something that appears wise and defensible.)
I currently have fairly high credence in longtermism, which I’d roughly phrase as the view that, even in expectation and even given the difficulties of predicting the distant future, most of the morally relevant consequences of our actions lie in the far future (meaning something like “anywhere from 100 years from now till the heat death of the universe”). In addition to that fairly high credence, it seems to me intuitively that the “stakes are much higher” on longtermism than on non-longtermist theories (e.g., a person-affecting view that only cares about people alive today). (I’m not sure I can really formally make sense of that intuition, because maybe those theories should be seen as incomparable and I should use variance voting to give them equal say, but at least for now that’s my tentative impression.)
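(For what it’s worth, here’s a toy sketch of what I understand variance voting to mean, with entirely made-up numbers just to make the idea concrete: each theory’s choiceworthiness scores get rescaled to equal variance before taking the credence-weighted sum, so a theory can’t win just by declaring astronomically higher stakes.)

```python
# Toy sketch of variance voting, with entirely made-up numbers.
# Each theory assigns a "choiceworthiness" score to each option; to stop
# high-stakes theories from dominating by fiat, each theory's scores are
# rescaled to unit variance before taking the credence-weighted sum.
import statistics

options = ["work on longtermist causes", "focus only on present people"]

# Hypothetical choiceworthiness scores per theory (not real estimates).
scores = {
    "longtermism":           [100.0, 1.0],
    "person-affecting view": [1.0, 2.0],
}
credences = {"longtermism": 0.7, "person-affecting view": 0.3}

def normalise(xs):
    """Rescale a theory's scores to mean 0 and standard deviation 1."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

totals = [0.0] * len(options)
for theory, xs in scores.items():
    for i, x in enumerate(normalise(xs)):
        totals[i] += credences[theory] * x

for option, total in zip(options, totals):
    print(f"{option}: {total:+.2f}")
```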
I also have somewhere between 10% and 90% credence that at least many nonhuman animals are conscious in a morally relevant sense, with non-negligible moral weights. This view again seems to suggest much higher stakes than a human-only view would (there are a bunch of reasons one might object to this, and they do lower my confidence, but it still seems to make more sense to say that animal-inclusive views contain all the potential value/disvalue the human-only view said there was, plus a whole bunch more). I haven’t bothered pinning my credence down much here, because multiplying a credence of 0.1 by the amount of suffering an individual’s contribution to factory farming would cause if that view is correct already seems enough to justify vegetarianism, making my precise credence less decision-relevant. (I may use a similar example in my post on value of information, now I think about it.)
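To make that concrete with placeholder numbers (purely illustrative; none of these are real estimates), the rough calculation I have in mind is something like:

```python
# Toy illustration of the "even 10% credence is enough" point, with
# hypothetical placeholder numbers (none of these are real estimates).
credence_animals_matter = 0.1   # lower end of my 10-90% range
harm_if_they_matter = 100.0     # disvalue of one person's factory-farm footprint, arbitrary units
cost_of_vegetarianism = 1.0     # personal cost of going vegetarian, same units

expected_harm_avoided = credence_animals_matter * harm_if_they_matter
print(expected_harm_avoided > cost_of_vegetarianism)  # True: the switch looks worthwhile in expectation
```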
As MacAskill notes in that paper, while vegetarianism is typically cheaper than meat-eating, strict vegetarianism will almost certainly be at least slightly more expensive (or inconvenient/time-consuming) than “vegetarian except when it’s expensive/difficult”. A similar thing would likely apply more strongly to veganism. So the more I move towards strict veganism, the more time and money it costs me. It’s not much, and a very fair argument could be made that I probably spend more on other things, like concert tickets or writing these comments. But it still does trade off somewhat against my longtermist efforts (currently centring on donating to the EA Long Term Future Fund and gobbling up knowledge and skills so I can do useful direct work, but I’ll also be starting a relevant job soon).
To me, the stakes under “longtermism plus animals matter” seem higher than under “animals matter” alone. Additionally, I have fairly high credence in longtermism, and no reason to believe that conditioning on animals mattering makes longtermism less likely (so even if I accept “animals matter”, I’d have basically the same fairly high credence in longtermism as before).
It therefore seems that a heuristic, maximise-expected-choiceworthiness (MEC) type of thinking should make me lean towards what longtermism says I should do, though with “side constraints” or “low-hanging fruit being plucked” from an “animals matter” perspective. This seems extra robust because, even if “animals matter”, I still expect longtermism is fairly likely, and then a lot of the policies that seem wise from a longtermist angle (getting us to existential safety, expanding our moral circle, raising our wisdom and ability to prevent suffering and increase joy) seem fairly wise from an “animals matter” angle too (because they’d help us help nonhumans later). (But I haven’t really tried to spell that last assumption out to check it makes sense.)
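As a very rough sketch of that heuristic MEC comparison, again with entirely made-up choiceworthiness numbers (not real estimates), the kind of reasoning I have in mind looks like:

```python
# Very rough sketch of the credence-weighted (MEC-style) comparison, with
# entirely made-up choiceworthiness numbers (not real estimates).
credences = {"longtermism": 0.7, "animals matter (non-longtermist)": 0.3}

# Choiceworthiness of each policy under each theory. Strict veganism costs a
# little longtermist time/money; "vegan when easy" captures most of the
# animal-welfare benefit at almost no longtermist cost.
choiceworthiness = {
    "strict veganism": {"longtermism": -1.0, "animals matter (non-longtermist)": 10.0},
    "vegan when easy": {"longtermism": -0.1, "animals matter (non-longtermist)": 8.0},
    "no diet change":  {"longtermism": 0.0,  "animals matter (non-longtermist)": 0.0},
}

for policy, scores in choiceworthiness.items():
    ev = sum(credences[theory] * scores[theory] for theory in credences)
    print(f"{policy}: {ev:+.2f}")
# With these numbers: strict veganism +2.30, vegan when easy +2.33, no change +0.00.
# I.e. the cheap "low-hanging fruit" version clearly beats doing nothing, and how
# the first two compare depends entirely on the made-up cost and benefit figures.
```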