My personal reason for pursuing vegetarianism (and ultimately veganism) is simple: I want the result of me having existed, as compared to an alternative universe where I did not exist, to be less overall suffering in the world. If I eat meat for my whole life, I’ll already have contributed to the creation of such a vast amount of suffering that it will be very hard to do anything that will reliably catch up with that. Each day of my life, I’ll be racking up more “suffering debt” to pay off, and I’d rather not have my mere existence contribute to adding more suffering.
That’s probably the abridged version, because if that were the actual goal, a doomsday machine would do the trick.
If you count pleasure as negative suffering…
Yes.
Do you have a fleshed-out version formulated somewhere? *tries to hide iron fireplace poker behind his back*
No. The “fleshed-out version” is rather complex, incomplete, and constantly changing, as it’s effectively the current compromise forged between the negative utilitarian, positive utilitarian, deontological, and purely egoist factions within my brain. It has plenty of inconsistencies, but I resolve those on a case-by-case basis as I encounter them. I don’t have a good answer to the doomsday machine objection, because I don’t currently expect to encounter a situation where my actions would have considerable influence on the creation of a doomsday machine, so I haven’t needed to resolve that particular inconsistency.
Of course, there is the question of x-risk mitigation work and the fact that e.g. my work for MIRI might reduce the risk of a doomsday machine, so I have been forced to give the question some consideration. My negative utilitarian faction would consider it a good thing if all life on Earth were eradicated, with the other factions strongly disagreeing. The current compromise is based on the suspicion that most kinds of x-risk would probably lead to massive suffering, in the form of an immense death toll followed by a gradual reconstruction that would eventually bring Earth’s population back to its current levels, rather than to all life on the planet going extinct. (Even for AI/Singularity scenarios there is great uncertainty and a non-trivial possibility of such an outcome.) All of my brain-factions agree that this would be a Seriously Bad scenario, so there is currently an agreement that work aimed at reducing the probability of this scenario is good, even if it indirectly influences the probability of an “everyone dies” scenario in one way or another. The compromise is only possible because we are currently very unsure of what would have a very strong effect on the probability of an “everyone dies” scenario.
I am unsure what would happen if we had good evidence that it really was possible to strongly increase or decrease the probability of an “everyone dies” scenario: with the current power balances, I expect that we’d just decide not to do anything either way, with the negative utilitarian faction being strong enough to veto attempts to save humanity, but not strong enough to override everyone else’s veto on attempts to destroy humanity. Of course, this assumes that humanity would basically go on experiencing its current levels of suffering after being saved: if saving humanity also involved a positive Singularity after which it was near-certain that nobody would need to experience involuntary suffering anymore, then the power balance would shift very strongly in favor of saving humanity.
I want the result of me having existed, as compared to an alternative universe where I did not exist, to be...
This seems like an arbitrary distinction. The value relevant to your ongoing decisions is in the opportunity cost of those decisions (and you know that). Why take the popular sentiment seriously, or even merely indulge yourself in it, when it’s known to be wrong?
It is indeed wrong, but it seems to produce mostly the same recommendations as framing the issue in terms of opportunity costs, while being more motivating. “Shifting to vegetarianism has a high expected suffering reduction” doesn’t compel action in nearly the same way as “I’m currently racking up a suffering debt every day of my life” does.
I’ll already have contributed to the creation of such a vast amount of suffering that it will be very hard to do anything that will reliably catch up with that.
Actually, it’s pretty easy: just donate enough money to organizations like Vegan Outreach such that you’re confident you have caused the creation of a new vegetarian/vegan.
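To make the offset math concrete, here’s a rough back-of-the-envelope sketch. Every number in it (animals eaten per year, cost per new vegetarian, donation size) is a hypothetical placeholder made up for illustration, not an actual effectiveness figure for Vegan Outreach or anyone else:

```python
# Rough Fermi estimate of the "offset by donation" claim.
# Every parameter here is a made-up placeholder, not real data.

animals_eaten_per_year = 30    # assumed consumption of one omnivore
remaining_years = 50           # assumed years of meat-eating left

cost_per_new_vegetarian = 500  # assumed outreach cost per conversion, in dollars
donation = 5_000               # assumed donation size, in dollars

expected_converts = donation / cost_per_new_vegetarian
animals_spared = expected_converts * animals_eaten_per_year * remaining_years
own_consumption = animals_eaten_per_year * remaining_years

print(f"expected new vegetarians: {expected_converts:.0f}")
print(f"expected animals spared:  {animals_spared:,.0f}")
print(f"own lifetime consumption: {own_consumption:,.0f}")
print(f"offset ratio:             {animals_spared / own_consumption:.1f}x")
```

Under these made-up numbers a single donation more than offsets a lifetime of consumption; the real question is how confident you can be in the cost-per-conversion estimate, which is exactly the parameter this sketch assumes away.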