Are you implying that we shouldn’t maximise expected utility when we’re faced with lots of events with dependent probabilities? This seems like an unusual stance.
I would limit this to cases where the dependency involves trusting an agent’s judgment (or honesty). I am not very good at figuring such a thing out and in cases like this whether I trust the agent has a large impact on the final decision.
> the mugger can name arbitrarily high numbers of people that they might torture, whereas you can figure out exactly how many non-human animals suffer and die as a result of your dietary choices
You can name an arbitrary figure for the likelihood that animals suffer, tailored to be small yet large enough that multiplying it by the number of animals I eat leads to the conclusion that eating them is bad.
It’s true that in this case you are arbitrarily picking the small figure rather than the large figure of a typical Pascal’s Mugging, but it still amounts to picking the right figure to get the right answer.
> I would limit this to cases where the dependency involves trusting an agent’s judgment (or honesty). I am not very good at figuring such a thing out and in cases like this whether I trust the agent has a large impact on the final decision.
But in this case, advocates for veganism are not being agents in the sense of implementing good/bad outcomes if you choose correctly/incorrectly, or personally gaining from you making one choice or another. Rather, we are just stating an argument and letting you judge how persuasive you think that argument is.
> You can name an arbitrary figure for the likelihood that animals suffer, tailored to be small yet large enough that multiplying it by the number of animals I eat leads to the conclusion that eating them is bad.
The probability that non-human animals suffer can’t be arbitrarily large (since it’s trivially bounded by 1), and for the purposes of the pro-veganism argument it can’t be arbitrarily small, as explained in my previous comment, making this argument decidedly non-Pascalian. Furthermore, I’m not picking your probability that non-human animals suffer, I’m just claiming that for any reasonable probability assignment, veganism comes out as the right thing to do. If I’m right about this, then I think that the conclusion follows, whether or not you want to call it Pascalian.
> But in this case, advocates for veganism are not being agents in the sense of implementing good/bad outcomes if you choose correctly/incorrectly, or personally gaining from you making one choice or another.
Human bias serves the role of personal gain in this case. (Also, the nature of vegetarianism makes it especially prone to such bias.)
> The probability that non-human animals suffer can’t be arbitrarily large (since it’s trivially bounded by 1),
It can be arbitrarily chosen in such a way as to always force the conclusion that eating animals is wrong. Being arbitrary enough for this purpose does not require being able to choose values greater than 1.
> It can be arbitrarily chosen in such a way as to always force the conclusion that eating animals is wrong. Being arbitrary enough for this purpose does not require being able to choose values greater than 1.
You are talking as if I am setting your probability that non-human animals suffer. I am not doing that: all that I am saying is that for any reasonable probability assignment, you get the conclusion that you shouldn’t eat non-human animals or their secretions. If this is true, then eating non-human animals or their secretions is wrong.
> You are talking as if I am setting your probability that non-human animals suffer.
You are arbitrarily selecting a number for the probability that animals suffer, and you can choose it so that, multiplied by the number of animals people eat, it always yields the conclusion that the expected damage is great enough that people should not eat animals.
This is similar to Pascal’s Mugging, except that you are choosing the smaller number instead of the larger one.
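To make the comparison concrete, here is a toy sketch (all numbers invented, not empirical claims) of the structure of a Pascal’s Mugging: the mugger controls the stakes, which are unbounded, so for any fixed credence a large enough claimed payoff dominates the calculation.

```python
# Toy model of the mugger's side of an expected-value argument.
# Every number here is hypothetical and chosen purely for illustration.

def expected_stakes(claimed_victims, credence):
    """Expected number of victims when the mugger's claim is weighted by our credence."""
    return claimed_victims * credence

credence = 1e-6  # hypothetical fixed credence in the mugger's claim
for n in (10**3, 10**6, 10**9):
    # The mugger simply names a bigger n; the listener's credence must
    # shrink at least as fast as n grows for the product to stay bounded.
    print(f"claimed victims {n:>10}: expected victims {expected_stakes(n, credence):g}")
```

The analogous move alleged here holds the stakes fixed (the animals actually eaten, a knowable number) and adjusts the probability instead, which is capped at 1.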
> for any reasonable probability assignment, you get the conclusion that you shouldn’t eat non-human animals
This is not true. For instance, assigning a probability of 1/100,000,000 to animals suffering like humans would not lead to that conclusion. However, 1/100,000,000 falls outside the range most people picture when they imagine a small but non-zero probability, so it sounds unreasonable even though it is not.
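As arithmetic, the disagreement can be sketched like this (a hypothetical illustration; the animal count, harm unit, and abstention cost are all invented for the example, and the conclusion turns entirely on the probability assignment):

```python
# Hypothetical expected-value sketch of the dispute. Every figure below is
# invented for illustration; only the structure of the calculation matters.

def expected_harm(p_suffering, animals_eaten, harm_per_animal=1.0):
    """Expected harm of one person's diet, in arbitrary harm units."""
    return p_suffering * animals_eaten * harm_per_animal

ANIMALS_PER_YEAR = 200    # invented order-of-magnitude consumption figure
COST_OF_ABSTAINING = 1.0  # invented cost of going vegan, in the same units

for p in (1e-8, 1e-3, 0.1):
    harm = expected_harm(p, ANIMALS_PER_YEAR)
    verdict = "abstain" if harm > COST_OF_ABSTAINING else "eat"
    print(f"p = {p:g}: expected harm {harm:g} -> {verdict}")
```

On these invented numbers, p = 1/100,000,000 gives an expected harm of 2e-6 units, far below the threshold, while p = 0.1 gives 20 units; the whole dispute is over which assignments count as “reasonable.”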