I wouldn’t say the last step that you attribute to me. Firstly, if I were going to talk about the long run, I would say that in the long run, you should maximise expected utility because you’ll probably get a lot of utility that way. That being said, I don’t want to talk about the long run at all, because we don’t make decisions for the long run. For instance, you could decide to have a bacon omelette for dinner today and then stay veg*n for the rest of your life, and the argument that you attribute to me wouldn’t work in that case, although I would urge you to not eat the bacon omelette. (In addition, the line of reasoning that I would actually want you to use would involve attributing >50% probability of cow, chicken, pig, sheep, and fish sentience, but that’s beside the point).
Rather, I would make a case like this: when you make a choice under uncertainty, there is a whole range of possible outcomes that could happen after the choice is made. Some of these outcomes will be better if you choose one option, and some will be better if you choose another. So we have to weigh up the outcomes to decide which choice is better. I claim that you should weigh each outcome in proportion to your probability of it occurring and to the difference in utility that the choice makes. Therefore, even if you only assign the “cows are sentient” or “theory X is true” outcomes a probability of 20%, the bad outcomes are so bad that we shouldn’t risk them. The fact that you assign probability >50% to no damage happening isn’t a sufficient condition to establish “taking the risk is OK”.
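To make the weighing described here concrete, a minimal sketch of the expected-utility comparison follows. Every utility number in it is a made-up placeholder, not anyone's actual estimate; only the structure of the calculation is the point.

```python
# Hypothetical numbers only: the utilities below are placeholders chosen
# for illustration. The shape of the comparison is what matters.

p_sentient = 0.20  # probability assigned to "cows are sentient"

# Utility of each (choice, outcome) pair on an arbitrary scale.
utility = {
    ("eat_meat", "sentient"): -1000,  # very bad outcome if they can suffer
    ("eat_meat", "not_sentient"): 5,  # small taste/convenience benefit
    ("go_vegn", "sentient"): 0,
    ("go_vegn", "not_sentient"): 0,
}

def expected_utility(choice):
    return (p_sentient * utility[(choice, "sentient")]
            + (1 - p_sentient) * utility[(choice, "not_sentient")])

for choice in ("eat_meat", "go_vegn"):
    print(choice, expected_utility(choice))
# eat_meat: 0.2 * (-1000) + 0.8 * 5 = -196.0
# go_vegn:  0.0
# Even though "no damage" gets probability 0.8 > 0.5, eating meat still
# has the lower expected utility with these placeholder numbers.
```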
That being said, I don’t want to talk about the long run at all, because we don’t make decisions for the long run. For instance, you could decide to have a bacon omelette for dinner today and then stay veg*n for the rest of your life, and the argument that you attribute to me wouldn’t work in that case, although I would urge you to not eat the bacon omelette.
The point is that given the way these probabilities add up, not only wouldn’t that work for a single bacon omelette, it wouldn’t work for a lifetime of bacon omelettes. They’re either all harmful or all non-harmful.
Therefore, even if you only assign the “cows are sentient” or “theory X is true” outcomes a probability of 20%, the bad outcomes are so bad that we shouldn’t risk them.
Your reasoning doesn’t depend on the exact number 20. It just says that the utility of the outcome should be multiplied by its probability. If the probability was 1% or 0.01% you could say exactly the same thing and it would be just as valid. In other words, your reasoning proves too much; it would imply accepting Pascal’s Mugging. And I don’t accept Pascal’s Mugging.
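A rough sketch of the worry being raised here, with purely illustrative numbers: because expected harm is just probability times harm, the same argument form goes through at 1% or 0.01%, and a mugger can always name a harm large enough to offset any probability.

```python
# Illustrative numbers only. Expected harm is linear in probability, so
# the argument's form is identical at 20%, 1%, or 0.01% ...

def expected_harm(p, harm):
    return p * harm

for p in (0.20, 0.01, 0.0001):
    print(p, expected_harm(p, harm=1000))  # 200.0, 10.0, 0.1

# ... and a Pascal's-Mugging-style claimant can offset any reduction in
# probability by naming a larger harm, keeping the product constant:
for p, claimed_harm in ((1e-3, 1e6), (1e-9, 1e12), (1e-30, 1e33)):
    print(p, expected_harm(p, claimed_harm))  # ~1000 each time, up to float rounding
```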
The point is that given the way these probabilities add up, not only wouldn’t that work for a single bacon omelette, it wouldn’t work for a lifetime of bacon omelettes. They’re either all harmful or all non-harmful.
I know. Are you implying that we shouldn’t maximise expected utility when we’re faced with lots of events with dependent probabilities? This seems like an unusual stance.
Your reasoning doesn’t depend on the exact number 20… If the probability was 1% or 0.01% you could say exactly the same thing and it would be just as valid.
My reasoning doesn’t depend on the exact number 20, but the probability can’t be arbitrarily low either. If the probability of cow sentience were only 1/1,000,000,000,000, then the expected utility of being veg*n would be lower than that of eating meat, since you would have to learn new recipes and worry about nutrition, and that would be costly enough to outweigh the very small chance of a very bad outcome.
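As a sketch of this lower bound, with made-up numbers for the switching cost and the harm at stake: below some break-even probability, the fixed cost of going veg*n outweighs the expected harm avoided.

```python
# Placeholder numbers only; the point is that a fixed switching cost
# implies a break-even probability below which eating meat has the
# higher expected utility.

switching_cost = 50          # cost of new recipes, nutrition planning, etc.
harm_if_sentient = 100_000   # harm done if the animals turn out to be sentient

def eu_vegn():
    return -switching_cost

def eu_meat(p_sentient):
    return -p_sentient * harm_if_sentient

breakeven = switching_cost / harm_if_sentient
print("break-even probability:", breakeven)  # 0.0005 with these numbers

for p in (1e-12, 0.2):
    better = "eat meat" if eu_meat(p) > eu_vegn() else "go veg*n"
    print(p, better)
# 1e-12 -> eat meat (expected harm ~1e-7, far below the cost of 50)
# 0.2   -> go veg*n
```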
In other words, your reasoning proves too much; it would imply accepting Pascal’s Mugging. And I don’t accept Pascal’s Mugging.
Again, this depends on what you mean by Pascal’s Mugging. If you mean the original version, then my reasoning does not necessarily imply being mugged, since the mugger can name arbitrarily high numbers of people that they might torture, whereas you can figure out exactly how many non-human animals suffer and die as a result of your dietary choices (if you’re an average American, approximately 200, only 30 if you don’t eat seafood, and only 1.4 if you also don’t eat chicken or eggs, according to this document), and nobody can boost this number in response to you claiming that you have a really small probability of them being sentient.
However, if by Pascal’s Mugging you mean “maximising expected utility when the probability of success is small but bounded from below and you have different sources of uncertainty”, then yes, you should accept Pascal’s Mugging, and I have never seen a convincing argument that you shouldn’t. Also, please don’t call that Pascal’s Mugging, since it is importantly different from its namesake.
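A small sketch of the disanalogy being claimed here, using the 200 / 30 / 1.4 figures quoted above together with placeholder values for the probability of sentience and the harm per animal: because the number of animals affected is a fixed, knowable quantity, the expected harm has a hard ceiling, unlike the mugger's stakes.

```python
# The 200 / 30 / 1.4 figures are the ones cited above; the probability
# and per-animal harm are placeholders. Since the animal counts are
# fixed, the expected harm is capped rather than inflatable.

p_sentient = 0.2       # placeholder probability of sentience
harm_per_animal = 1.0  # placeholder harm if sentient (arbitrary scale)

animals_affected = {
    "average American diet": 200,
    "no seafood": 30,
    "no chicken or eggs either": 1.4,
}

for diet, n in animals_affected.items():
    expected_harm = p_sentient * n * harm_per_animal
    print(f"{diet}: expected harm <= {expected_harm}")
# Lowering p_sentient lowers these numbers; nothing on the other side of
# the argument can be raised to compensate, unlike in the mugging case.
```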
Are you implying that we shouldn’t maximise expected utility when we’re faced with lots of events with dependent probabilities? This seems like an unusual stance.
I would limit this to cases where the dependency involves trusting an agent’s judgment (or honesty). I am not very good at figuring such a thing out and in cases like this whether I trust the agent has a large impact on the final decision.
the mugger can name arbitrarily high numbers of people that they might torture, whereas you can figure out exactly how many non-human animals suffer and die as a result of your dietary choices
You can name an arbitrary figure for what the likelihood is that animals suffer, said arbitrary figure being tailored to be small yet large enough that multiplying it by the number of animals I eat leads to the conclusion that eating them is bad.
It’s true that in this case you are arbitrarily picking the small figure rather than the large figure as in a typical Pascal’s Mugging, but it still amounts to picking whatever figure produces the conclusion you want.
I would limit this to cases where the dependency involves trusting an agent’s judgment (or honesty). I am not very good at figuring such a thing out and in cases like this whether I trust the agent has a large impact on the final decision.
But in this case, advocates for veganism are not being agents in the sense of implementing good/bad outcomes if you choose correctly/incorrectly, or personally gaining from you making one choice or another. Rather, we are just stating an argument and letting you judge how persuasive you think that argument is.
You can name an arbitrary figure for what the likelihood is that animals suffer, said arbitrary figure being tailored to be small yet large enough that multiplying it by the number of animals I eat leads to the conclusion that eating them is bad.
The probability that non-human animals suffer can’t be arbitrarily large (since it’s trivially bounded by 1), and for the purposes of the pro-veganism argument it can’t be arbitrarily small, as explained in my previous comment, making this argument decidedly non-Pascalian. Furthermore, I’m not picking your probability that non-human animals suffer, I’m just claiming that for any reasonable probability assignment, veganism comes out as the right thing to do. If I’m right about this, then I think that the conclusion follows, whether or not you want to call it Pascalian.
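One way to read the “any reasonable probability assignment” claim is as a robustness check: rather than fixing a single number, sweep over a range of assignments and see whether the conclusion changes. The utilities below are placeholders, chosen only to show the shape of the check.

```python
# Placeholder utilities; the sweep is what matters, not the values.

switching_cost = 50
harm_if_sentient = 100_000

def vegn_comes_out_ahead(p_sentient):
    return p_sentient * harm_if_sentient > switching_cost

for p in (0.9, 0.5, 0.2, 0.05, 0.01, 0.001):
    print(p, vegn_comes_out_ahead(p))
# True for every p above switching_cost / harm_if_sentient = 0.0005;
# the claim under dispute is whether every "reasonable" assignment of
# the probability of animal suffering sits above that threshold.
```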
But in this case, advocates for veganism are not being agents in the sense of implementing good/bad outcomes if you choose correctly/incorrectly, or personally gaining from you making one choice or another.
Human bias serves the role of personal gain in this case. (Also, the nature of vegetarianism makes it especially prone to such bias.)
The probability that non-human animals suffer can’t be arbitrarily large (since it’s trivially bounded by 1),
It can be arbitrarily chosen in such a way as to always force the conclusion that eating animals is wrong. Being arbitrary enough for this purpose does not require being able to choose values greater than 1.
It can be arbitrarily chosen in such a way as to always force the conclusion that eating animals is wrong. Being arbitrary enough for this purpose does not require being able to choose values greater than 1.
You are talking as if I am setting your probability that non-human animals suffer. I am not doing that: all that I am saying is that for any reasonable probability assignment, you get the conclusion that you shouldn’t eat non-human animals or their secretions. If this is true, then eating non-human animals or their secretions is wrong.
You are talking as if I am setting your probability that non-human animals suffer.
You are arbitrarily selecting a number for the probability that animals suffer. This number can be chosen by you such that when multiplied by the number of animals people eat, it always results in the conclusion that the expected damage is enough that people should not eat animals.
This is similar to Pascal’s Mugging, except that you are choosing the smaller number instead of the larger number.
for any reasonable probability assignment, you get the conclusion that you shouldn’t eat non-human animals
This is not true. For instance, assigning a probability of 1/100,000,000 to animals suffering the way humans do would not lead to that conclusion. However, 1/100,000,000 falls outside the range that most people picture when they think of a small but finite probability, so it sounds unreasonable even though it is not.