Aside from painting “LessWrong types” in really broad, unflattering strokes, I thought the author made several good points. Note though that I am a ~15 year vegetarian (and sometime vegan) myself and I definitely identify with his argument, so there’s the opportunity for subjective validation to creep in. I also find many preference-utilitarian viewpoints persuasive, though I wouldn’t yet identify as one.
I think the 20% thing and the 1-in-20 thing were just hypothetical, so we shouldn’t get too hung up on them; I think his case is just as strong without any numbers. There is some uncertainty about the continuum of animal cognition and how it relates to their capacity to suffer.
My own personal voice-inside-my-head reasons for vegetarianism can be summarized as follows: “I am an animal, but a unique kind of animal who can understand what it means to feel pain and to die and who doesn’t want that to happen to himself or to any other animals. My unique kind of animal can also live a happy, healthy life at very little personal expense without causing other animals to feel pain or to die.” Thus, Rob’s first 4 premises (particularly 2 and 3) resonated with me.
I don’t believe other animals, even other mammals, have anything like human consciousness. Nor do I believe they should be accorded human rights. But I know that at the end of the day, biologically I am a mammal; if you’re warm-blooded and you’ve got hair and a neocortex, then I’m really going to avoid hurting/killing you. If you have a spine and a pulse, I’m giving you the benefit of the doubt.
There is some uncertainty about the continuum of animal cognition and how it relates to their capacity to suffer.
Having a small uncertainty about animal suffering, and then saying that because of the large number of animals we eat even that small uncertainty is enough to make eating animals bad, is a variation on Pascal’s Mugging.
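Yeah, this is why I used the number ‘1-in-20’. It’s somewhat arbitrary, but it serves the function of ruling out Pascal-level uncertainty.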
I can understand why you shouldn’t incentivise someone to possibly torture lots of people by being the sort of person who gives in to Pascal’s mugging (in the original formulation). That being said, here you seem to be using Pascal’s mugging to refer to doing anything with high expected utility but low probability of success. Why is that irrational?
Actually, I’m using it to refer to something which has high expected utility, low probability of success, and a third criterion: you are uncertain about what the probability really is. A sweepstakes with 100 tickets, of which you hold one, gives you a 1% chance of winning. A sweepstakes with only 2 tickets, of which you hold one, but where you think there’s a 98% chance that the person running the sweepstakes is a fraudster, also gives you a 1% chance of winning, but that seems fundamentally different from the first case.
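To make the arithmetic behind those two cases explicit, here is a minimal sketch (Python, with the illustrative numbers from above) showing that both sweepstakes come out at the same 1% headline probability even though the second is built out of two very different components:

```python
# Case 1: 100 tickets, you hold one, and the organiser is assumed honest.
p_win_simple = 1 / 100                       # 0.01

# Case 2: 2 tickets, you hold one, but you think there is a 98% chance
# that the organiser is a fraudster who will never pay out.
p_honest = 1 - 0.98                          # 0.02
p_win_if_honest = 1 / 2                      # one ticket out of two
p_win_compound = p_honest * p_win_if_honest  # 0.02 * 0.5 = 0.01

print(p_win_simple, p_win_compound)          # both 0.01, built from different parts
```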
you are uncertain about what the probability really is
I think this is a misunderstanding of the idea of probability. The real world is either one way or another, either we will actually win the sweepstakes or we won’t. Probability comes into the picture in our heads, telling us how likely we think a certain outcome is, and how much we weight it when making decisions. As such, I don’t think it makes sense to talk about having uncertainty about what a probability really is, except for the case of a lack of introspection.
Also, going back to Robby’s post:
We don’t know enough about how cattle cognize, and about what kinds of cognition make things moral patients, to assign a less-than-1-in-20 subjective probability to ‘factory-farmed cattle undergo large quantities of something-morally-equivalent-to-suffering’.
This seems like an important difference from what you’re talking about. In this case, the probabilities are bounded below by a not-ridiculously-small number, which (Robby claims) is high enough that we should not eat meat. If you grant that your probability does in fact obey such a bound, and that that bound suffices for the case for veg*nism, then I think the result follows, whether or not you call it a Pascal’s mugging.
If you don’t like the phrase “uncertainty about the probability”, think of it as a probability that is made up of particular kinds of multiple components.
The second sweepstakes example has two components, uncertainty about which entry will be picked and uncertainty about whether the manager is honest. The first one only has uncertainty about which entry will be picked. You could split up the first example mathematically (uncertainty about whether your ticket falls in the last two entries and uncertainty about which of the last two entries your ticket is) but the two parts you get are conceptually much closer than in the second example.
In this case, the probabilities are bounded below by a not-ridiculously-small number, which (Robby claims) is high enough that we should not eat meat.
Like the possibility that the sweepstakes manager is dishonest, “we don’t know enough about how cattle cognize” is all or nothing; if you do multiple trials, the distribution is a lot more lumpy. If all cows had exactly 20% of the capacity of humans, then five cows would have 100% in total. If there’s a 20% chance that cows have as much as humans and an 80% chance that they have nothing at all, that’s still a 20% chance, but five cows would have a lumpy distribution—instead of five cows having a guaranteed 100%, there would be a 20% chance of having 500% and an 80% chance of nothing.
In some sense, each case has a probability bounded by 20% for a single cow. But in the first case, there’s no chance of 0%, and in the second case, not only is there a chance of 0%, but the chance of 0% doesn’t decrease as you add more cows. The implications of “the probability is bounded by 20%” that you probably want to draw do not follow in the latter case.
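A small sketch of that contrast for five cows, treating “moral capacity” as a single number purely for illustration: both scenarios have the same expected total, but one is a guaranteed 100% and the other is all-or-nothing.

```python
n_cows = 5

# Scenario A: every cow certainly has 20% of human capacity.
total_a = n_cows * 0.20             # always 1.0, i.e. a guaranteed "100%"

# Scenario B: 20% chance cows have full human capacity, 80% chance of none.
outcomes_b = {n_cows * 1.00: 0.20,  # "500%" with probability 0.2
              0.00:          0.80}  # "0%"   with probability 0.8
expected_b = sum(total * prob for total, prob in outcomes_b.items())

print(total_a, expected_b)          # same expectation (1.0), very different shapes
```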
the two parts you get are conceptually much closer than in the second example.
I still don’t see why this matters? To put things concretely, if I would be willing to buy the ticket in the first sweepstakes, why wouldn’t I be willing to do so in the second? Sure, the uncertainty comes from different sources, but what does this matter for me and how much money I make?
The implications of “the probability is bounded by 20%” that you probably want to draw do not follow in the latter case.
If I understand you correctly, you seem to be drawing a slightly different distinction here than I thought you were, claiming that the distinction is between 100% probability of a cow consciousness that is 20% as intense as human consciousness, as opposed to a 20% probability of a cow consciousness that is 100% as intense as human consciousness (for some definition of intensity). Am I understanding you correctly?
In any case, I still think that the implications that I want to draw do in fact follow. In the latter case, I would think that eating meat has a 20% chance of producing a really horrible effect, and an 80% chance of being mildly convenient for you, so you definitely shouldn’t eat meat. Is there something that I am missing?
ETA: Again, to put things more concretely, consider theory X: that whenever 50 loaves of bread are bought, someone creates a human, keeps them in horrible conditions, and then kills them. Your probability for theory X being true is 20%. If you remove bread from your diet, you will have to learn a whole bunch of new recipes, and your diet might be slightly low in carbohydrates. Do you think that it is OK to continue eating bread? If not, your disagreement with the case for veg*nism is a different assessment of the facts, rather than a condemnation of the sort of probabilistic reasoning that is used.
I imagine the line of reasoning you want me to use to be something like this:
“Well, the probability of cow sentience is bounded by 20%, so you shouldn’t eat cows.”
“How do you get to that conclusion? After all, it’s not certain. In fact, it’s less certain than not. The most probable result, at 80%, is that no damage is done to cows whatsoever.”
“Well, you should calculate the expectation. 20% large effect + 80% no effect is still enough of a bad effect to care about.”
“But I’m never going to get that expectation. I’m either going to get the full effect or nothing at all.”
“If you eat meat many times, the damage done will add up. Although you could be lucky if you only do it once and cause no damage, if you do it many times you’re almost certain to cause damage. And the average amount of damage done will be equal to that expectation multiplied by the number of trials.”
If there’s a component of uncertainty over the probability, that last step doesn’t really work, since many trials are still all or nothing when combined.
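A rough simulation of what “all or nothing when combined” means numerically, with assumed figures (a 20% probability of sentience and one unit of harm per meal): if each meal’s harm were resolved independently, totals would cluster near the expectation, but if sentience is a single world-level fact shared by every meal, the total never lands anywhere near the expectation no matter how many meals are added.

```python
import random

random.seed(0)
p_sentient = 0.20      # assumed probability that cows are sentient
n_meals = 100          # meals considered
n_worlds = 5000        # simulated possible worlds

def total_harm_independent():
    # Hypothetical: each meal independently turns out harmful or not.
    return sum(1 for _ in range(n_meals) if random.random() < p_sentient)

def total_harm_correlated():
    # One fact of the matter: either every meal was harmful or none was.
    return n_meals if random.random() < p_sentient else 0

indep = [total_harm_independent() for _ in range(n_worlds)]
corr = [total_harm_correlated() for _ in range(n_worlds)]

print(sum(indep) / n_worlds, sum(corr) / n_worlds)  # both average about 20
print(min(indep), max(indep))                       # clustered around 20
print(sorted(set(corr)))                            # only 0 or 100
```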
I wouldn’t say the last step that you attribute to me. Firstly, if I were going to talk about the long run, I would say that in the long run, you should maximise expected utility because you’ll probably get a lot of utility that way. That being said, I don’t want to talk about the long run at all, because we don’t make decisions for the long run. For instance, you could decide to have a bacon omelette for dinner today and then stay veg*n for the rest of your life, and the argument that you attribute to me wouldn’t work in that case, although I would urge you to not eat the bacon omelette. (In addition, the line of reasoning that I would actually want you to use would involve assigning >50% probability to cow, chicken, pig, sheep, and fish sentience, but that’s beside the point.)
Rather, I would make a case like this: when you make a choice under uncertainty, you have a whole bunch of possible outcomes that could happen after the choice is made. Some of these outcomes will be better when you choose one option, and some will be better when you choose another. So, we have to weigh up which outcomes we care about to decide which choice is better. I claim that you should weigh each outcome in proportion to your probability of it occurring, and the difference in utility that the choice makes. Therefore, even if you only assign the “cows are sentient” or “theory X is true” outcomes a probability of 20%, the bad outcomes are so bad that we shouldn’t risk them. The fact that you assign probability >50% to no damage happening isn’t a sufficient condition to establish “taking the risk is OK”.
That being said, I don’t want to talk about the long run at all, because we don’t make decisions for the long run. For instance, you could decide to have a bacon omelette for dinner today and then stay veg*n for the rest of your life, and the argument that you attribute to me wouldn’t work in that case, although I would urge you to not eat the bacon omelette.
The point is that given the way these probabilities add up, not only wouldn’t that work for a single bacon omelette, it wouldn’t work for a lifetime of bacon omelettes. They’re either all harmful or all non-harmful.
Therefore, even if you only assign the “cows are sentient” or “theory X is true” outcomes a probability of 20%, the bad outcomes are so bad that we shouldn’t risk them.
Your reasoning doesn’t depend on the exact number 20. It just says that the utility of the outcome should be multiplied by its probability. If the probability was 1% or 0.01% you could say exactly the same thing and it would be just as valid. In other words, your reasoning proves too much; it would imply accepting Pascal’s Mugging. And I don’t accept Pascal’s Mugging.
The point is that given the way these probabilities add up, not only wouldn’t that work for a single bacon omelette, it wouldn’t work for a lifetime of bacon omelettes. They’re either all harmful or all non-harmful.
I know. Are you implying that we shouldn’t maximise expected utility when we’re faced with lots of events with dependent probabilities? This seems like an unusual stance.
Your reasoning doesn’t depend on the exact number 20… If the probability was 1% or 0.01% you could say exactly the same thing and it would be just as valid.
My reasoning doesn’t depend on the exact number 20, but the probability can’t be arbitrarily low either. If the probability of cow sentience were only 1⁄1,000,000,000,000, then the expected utility of being veg*n would be lower than that of eating meat, since you would have to learn new recipes and worry about nutrition, and that would be costly enough to outweigh the very small chance of a very bad outcome.
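As a rough illustration of that trade-off, with utility magnitudes that are assumptions rather than anything argued for above: the expected-harm term scales with the probability of sentience, so below some threshold the fixed cost of switching dominates.

```python
# Illustrative magnitudes only, not claims about the real stakes.
harm_if_sentient = 1_000_000   # disutility of a lifetime of meat-eating, if cows are sentient
cost_of_switching = 10         # disutility of new recipes, nutrition planning, etc.

def worth_going_vegan(p_sentient):
    expected_harm_from_meat = p_sentient * harm_if_sentient
    return expected_harm_from_meat > cost_of_switching

print(worth_going_vegan(0.20))    # True:  expected harm 200,000 dwarfs the switching cost
print(worth_going_vegan(1e-12))   # False: expected harm 0.000001 is outweighed by the cost
```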
In other words, your reasoning proves too much; it would imply accepting Pascal’s Mugging. And I don’t accept Pascal’s Mugging.
Again, this depends on what you mean by Pascal’s Mugging. If you mean the original version, then my reasoning does not necessarily imply being mugged, since the mugger can name arbitrarily high numbers of people that they might torture, whereas you can figure out exactly how many non-human animals suffer and die as a result of your dietary choices (if you’re an average American, approximately 200, only 30 if you don’t eat seafood, and only 1.4 if you also don’t eat chicken or eggs, according to this document), and nobody can boost this number in response to you claiming that you have a really small probability of them being sentient.
However, if by Pascal’s Mugging you mean “maximising expected utility when the probability of success is small but bounded from below and you have different sources of uncertainty”, then yes, you should accept Pascal’s Mugging, and I have never seen a convincing argument that you shouldn’t. Also, please don’t call that Pascal’s Mugging, since it is importantly different from its namesake.
Are you implying that we shouldn’t maximise expected utility when we’re faced with lots of events with dependent probabilities? This seems like an unusual stance.
I would limit this to cases where the dependency involves trusting an agent’s judgment (or honesty). I am not very good at figuring such a thing out and in cases like this whether I trust the agent has a large impact on the final decision.
the mugger can name arbitrarily high numbers of people that they might torture, whereas you can figure out exactly how many non-human animals suffer and die as a result of your dietary choices
You can name an arbitrary figure for what the likelihood is that animals suffer, said arbitrary figure being tailored to be small yet large enough that multiplying it by the number of animals I eat leads to the conclusion that eating them is bad.
It’s true that in this case you are arbitrarily picking the small figure rather than the large figure as in a typical Pascal’s Mugging, but it still amounts to picking the right figure to get the right answer.
I would limit this to cases where the dependency involves trusting an agent’s judgment (or honesty). I am not very good at figuring such a thing out and in cases like this whether I trust the agent has a large impact on the final decision.
But in this case, advocates for veganism are not being agents in the sense of implementing good/bad outcomes if you choose correctly/incorrectly, or personally gaining from you making one choice or another. Rather, we are just stating an argument and letting you judge how persuasive you think that argument is.
You can name an arbitrary figure for what the likelihood is that animals suffer, said arbitrary figure being tailored to be small yet large enough that multiplying it by the number of animals I eat leads to the conclusion that eating them is bad.
The probability that non-human animals suffer can’t be arbitrarily large (since it’s trivially bounded by 1), and for the purposes of the pro-veganism argument it can’t be arbitrarily small, as explained in my previous comment, making this argument decidedly non-Pascalian. Furthermore, I’m not picking your probability that non-human animals suffer, I’m just claiming that for any reasonable probability assignment, veganism comes out as the right thing to do. If I’m right about this, then I think that the conclusion follows, whether or not you want to call it Pascalian.
But in this case, advocates for veganism are not being agents in the sense of implementing good/bad outcomes if you choose correctly/incorrectly, or personally gaining from you making one choice or another.
Human bias serves the role of personal gain in this case. (Also, the nature of vegetarianism makes it especially prone to such bias.)
The probability that non-human animals suffer can’t be arbitrarily large (since it’s trivially bounded by 1),
It can be arbitrarily chosen in such a way as to always force the conclusion that eating animals is wrong. Being arbitrary enough for this purpose does not require being able to choose values greater than 1.
It can be arbitrarily chosen in such a way as to always force the conclusion that eating animals is wrong. Being arbitrary enough for this purpose does not require being able to choose values greater than 1.
You are talking as if I am setting your probability that non-human animals are sentient. I am not doing that: all that I am saying is that for any reasonable probability assignment, you get the conclusion that you shouldn’t eat non-human animals or their secretions. If this is true, then eating non-human animals or their secretions is wrong.
You are talking as if I am setting your probability that non-human animals are sentient.
You are arbitrarily selecting a number for the probability that animals suffer. This number can be chosen by you such that when multiplied by the number of animals people eat, it always results in the conclusion that the expected damage is enough that people should not eat animals.
This is similar to Pascal’s Mugging, except that you are choosing the smaller number instead of the larger number.
for any reasonable probability assignment, you get the conclusion that you shouldn’t eat non-human animals
This is not true. For instance, a probability assignment of 1/100000000 to the probability that animals suffer like humans would not lead to that conclusion. However, 1/100000000 falls outside the range that most people think of when they think of a small but finite probability, so it sounds unreasonable even though it is not.