The most up-to-date report—and also one of the most conservative—found that the least sentient fish they surveyed experience suffering about one-twentieth as intensely as humans do.
You cite this as though it were a fact discovered through some sort of empirical research, but of course it’s nothing of the sort.
Even the claim that fish can suffer, at all (much less “one-twentieth as intensely” as people suffer!) is not sufficiently well-established to take as given. (It seems false to me, for example.)
And the Rethink Priorities report you link in fact says:
These estimates are, essentially, estimates of the differences in the possible intensities of these animals’ pleasures and pains relative to humans’ pleasures and pains. Then, we add a number of controversial (albeit plausible) philosophical assumptions (including hedonism, valence symmetry, and others discussed here) to reach conclusions about animals’ welfare ranges relative to humans’ welfare range.
And, indeed, they sure do make many philosophical assumptions that are very controversial, such as:
For simplicity’s sake, we assume that humans’ welfare range is symmetrical around the neutral point.
Hedonism, according to which welfare is determined wholly by positively and negatively valenced experiences (roughly, experiences that feel good and bad to the subject).
Valence symmetry, according to which positively and negatively valenced experiences of equal intensities have symmetrical impacts on welfare.
And many more.
What’s more, the actual numbers were generated by a very complicated process of surveying an extensive literature in animal psychology and related fields, and converting some exceedingly complex, in many cases very uncertain, and in most cases qualitative findings into numbers. This process involved many essentially arbitrary steps, judgment calls, philosophically questionable reifications, etc., etc., such that it is extremely unclear whether the resulting values even mean anything at all, much less whether they map well to anything like our usual concept of “suffering”, or any other concept at all.
So this numerical ratio which you so blithely quote is not even close to being well-grounded enough to be able to support anything like the sort of argument you give.
I think the poster acknowledges that the number 20 is somewhat ad hoc and handwavy; for example, they go on to do the calculations later in their post assuming fish suffering is 100 times less than human suffering. So they have given the number a factor-of-5 uncertainty.
Although, when I was reading the post, I saw that as more a rhetorical “trap” than a real point. As soon as the poster says “fish suffering is 20 times less important than human suffering”, it invites everyone to focus on the number 20 and start trying to work out whether a ratio of 100 or 1,000 would align better with their own instincts. The trap is that anyone who accepts the real premise (that human suffering and fish suffering are somehow interchangeable at some exchange rate) is already going to be snared by the argument, because even gigantic factors are going to make fish farming work out as a bigger problem than, say, gun homicides or traffic accidents.
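To see why even gigantic factors don’t spring the trap, here is a minimal back-of-the-envelope sketch; every figure in it is a rough, illustrative order-of-magnitude assumption, not a number taken from the post or the report:

```python
# Rough sketch of the "trap": once fish and human suffering are treated as
# interchangeable at ANY exchange rate, sheer scale tends to dominate.
# All figures below are illustrative order-of-magnitude assumptions.

FARMED_FISH_KILLED_PER_YEAR = 100e9  # assumed: ~10^11 globally
US_GUN_HOMICIDES_PER_YEAR = 20e3     # assumed: ~10^4
US_TRAFFIC_DEATHS_PER_YEAR = 40e3    # assumed: ~10^4

human_scale = US_GUN_HOMICIDES_PER_YEAR + US_TRAFFIC_DEATHS_PER_YEAR

for exchange_rate in (20, 100, 1_000, 100_000):
    # "human-equivalents" of fish deaths at this 1-human : N-fish rate
    fish_equivalents = FARMED_FISH_KILLED_PER_YEAR / exchange_rate
    print(f"1:{exchange_rate:>7,} -> {fish_equivalents:>13,.0f} "
          f"fish human-equivalents vs {human_scale:,.0f} gun + traffic deaths")
```

Even at an assumed 100,000:1 exchange rate, the fish term is still an order of magnitude larger than the combined gun and traffic figures, which is exactly the snare described above.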
They don’t give a factor-of-5 uncertainty. They add a 100x discount on top of the 20x discount, counting fish suffering as 2000x less important than human suffering.
There’s a trollish answer to this point (that I somewhat agree with) which is to just say: okay, let’s adopt moral uncertainty over all of the philosophically difficult premises too. So let’s say there’s only a 1% chance that raw intensity of pain matters, and a 99% chance that you need to be self-reflective in certain ways to have qualia and suffer in a way that matters morally, or that moral weight scales with cortical neuron count, or that only humans matter.
...and probably the math still works out very unfavorably.
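To make that “trollish” arithmetic concrete, here is a minimal sketch of the expected-value calculation under moral uncertainty; every probability and weight in it is an illustrative assumption:

```python
# Sketch of expected value under moral uncertainty, per the comment above.
# Every number here is an illustrative assumption, not a claim from the post.

P_RAW_PAIN_MATTERS = 0.01     # assumed: 1% credence that raw pain intensity counts
FISH_KILLED_PER_YEAR = 100e9  # assumed order-of-magnitude scale of fish farming
WEIGHT_IF_TRUE = 1 / 2000     # assumed: the post's own combined 2000x discount

# Expected human-equivalent suffering per year if the 1% view is right
# (the other 99% of views are assumed to assign fish zero weight):
expected = P_RAW_PAIN_MATTERS * FISH_KILLED_PER_YEAR * WEIGHT_IF_TRUE
print(f"{expected:,.0f} expected human-equivalents per year")  # 500,000
```

Under these assumed numbers the expected term still dwarfs, say, tens of thousands of annual traffic deaths, which is the sense in which the math works out “very unfavorably”.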
I say trollish because a decision procedure like this strikes me as likely to swamp and overwhelm you with way too many different considerations pointing in all sorts of crazy directions, and to be just generally unworkable, so I feel like something has to be going wrong here.
Still, I do feel like the fact that the answer is non-obvious in this way, and does rely on philosophical reflection, means you can’t draw many deep, abiding conclusions about human empathy or the “worthiness” of human civilization (whatever that really means) from how we treat fish.
One objection to this method of dealing with moral uncertainty comes from this great post on the EA forum: it covers an old paper by Tyler Cowen which argues that once you give any weight at all to utilitarianism, you’re susceptible to moral dilemmas like the repugnant conclusion, and (here comes the interesting claim) there’s no escape from this, including by invoking moral uncertainty:
A popular response in the Effective Altruist community to problems that seem to involve something like dogmatism or ‘value dictatorship’—indeed, the response William MacAskill gave when Cowen himself made some of these points in an interview—is to invoke moral uncertainty. If your moral view faces challenges like these, you should downweigh your confidence in it; and then, if you place some weight on multiple moral views, you should somehow aggregate their recommendations, to reach an acceptable compromise between ethical outlooks.
Various theories of moral uncertainty exist, outlining how this aggregation works; but none of them actually escape the issue. The theories of moral uncertainty that Effective Altruists rely on are themselves frameworks for commensurating values and systematically ranking options, and (as such) they are also vulnerable to ‘value dictatorship’, where after some point the choices recommended by utilitarianism come to swamp the recommendations of other theories. In the literature, this phenomenon is well-known as ‘fanaticism’.[10]
Once you let utilitarian calculations into your moral theory at all, there is no principled way to prevent them from swallowing everything else. And, in turn, there’s no way to have these calculations swallow everything without them leading to pretty absurd results. While some of you might bite the bullet on the repugnant conclusion or the experience machine, it is very likely that you will eventually find a bullet that you don’t want to bite, and you will want to get off the train to crazy town; but you cannot consistently do this without giving up the idea that scale matters, and that it doesn’t just stop mattering after some point.
So, what other options are there? Well, this is where Cowen’s paper comes in: it turns out, there are none. For any moral theory with universal domain where utility matters at all, either the marginal value of utility diminishes rapidly (asymptotically) towards zero, or considerations of utility come to swamp all other values.
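One way to see Cowen’s dichotomy concretely is with a toy model; the functional forms below are entirely assumed for illustration and are not from the paper. Write total value as a utility term plus everything else, and compare an unbounded utility term with one whose marginal value asymptotes to zero:

```python
import math

# Toy model of Cowen's dichotomy (assumed functional forms, not from the paper):
# total value = f(utility) + other_values. If f is unbounded, the utility term
# eventually swamps everything else ("value dictatorship" / fanaticism); only
# a bounded f, whose marginal value decays to zero, avoids the swamping.

def unbounded_f(u: float) -> float:
    return u                 # utility matters at a constant rate

def bounded_f(u: float) -> float:
    return 1 - math.exp(-u)  # asymptotes to 1; marginal value -> 0

OTHER_VALUES = 10.0          # assumed fixed weight of all non-utility values

for u in (1, 100, 10_000):
    print(f"u={u:>6}: unbounded total={unbounded_f(u) + OTHER_VALUES:>10.1f}, "
          f"bounded total={bounded_f(u) + OTHER_VALUES:.1f}")
# With unbounded_f the utility term dominates as u grows; with bounded_f the
# other values always remain comparable, but then scale stops mattering.
```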
I say trollish because a decision procedure like this strikes me as likely to swamp and overwhelm you with way too many different considerations pointing in all sorts of crazy directions, and to be just generally unworkable, so I feel like something has to be going wrong here.
I feel like this decision procedure is difficult but necessary, in that I can’t think of any other decision procedure you can follow that won’t cause you to pass up enormous amounts of utility, violate lots of deontological constraints, or whatever you decide morality is made of on reflection. Surely if you actually think some consideration is 1% likely to be true, you should act on it?
There are lots of people who have tried to estimate the intensity of fish suffering. I think the Rethink Priorities report is the best methodologically—but the others tend to estimate that fish suffer more. If you note in the RP report, they say that those controversial philosophical assumptions don’t affect the end result much. I agree that it’s very uncertain—this report gives the lowest estimate I was able to find.
If you note in the RP report, they say that those controversial philosophical assumptions don’t affect the end result much.
They say this of some of their assumptions (a small subset, really), not all of them. (And I am not sure I believe them even about that subset.)
I think the Rethink Priorities report is the best methodologically
This says more about the fundamental problems with the entire endeavor (not to mention the epistemic sloppiness of animal-rights advocacy in general) than it does about the RP report’s conclusions.
I agree that it’s very uncertain—this report gives the lowest estimate I was able to find.
Surely this can’t be true.
For example, my estimate is that the “intensity of fish suffering” is “zero” and/or “undefined” (either answer may be appropriate, depending on how you construe the question). That’s much lower, I think you’ll agree, than the number given by the RP report!
Now, perhaps you don’t consider me a credible source on this question. That’s fine. My point, however, is this: if some people think that “fish can suffer” is both true and something that can be meaningfully measured or estimated, but other people think that “fish can suffer” is silly, incoherent nonsense, then the following things will be true:
The former sort of people will be the ones engaging in such estimation/measurement efforts, and producing such reports.
The former sort of people will, almost necessarily, be the sort of people who care about animal rights, animal welfare, animal advocacy, etc.
That means that the set of people who take the question (“how intensely can fish suffer”) seriously, and attempt to produce an answer, will be selected for being the sorts of people who are inclined to come up with an answer that implies we should care about the suffering of fish.
Thus when you say “well, I went looking for serious attempts to measure/estimate the answer to the question ‘how intensely do fish suffer’, and here are the answers I found”, you will of course get stuff like this “one-twentieth as much as humans”, and are pretty much guaranteed not to get answers like “not at all, in fact the question is confused to begin with”—but this outcome will be almost uncorrelated with which answer is actually true.
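The selection effect described above can be made vivid with a toy simulation; all parameters are assumed purely for illustration. Researchers who consider the question meaningless never publish a number, so the published distribution of estimates cannot contain their answer:

```python
import random

# Toy simulation of the selection effect described above.
# All parameters are assumed purely for illustration.

random.seed(0)
P_THINKS_MEANINGLESS = 0.5  # assumed: fraction of informed people who consider
                            # "how intensely do fish suffer?" a confused question

published_estimates = []
for _ in range(1000):       # 1000 hypothetical researchers
    if random.random() < P_THINKS_MEANINGLESS:
        continue            # these researchers never produce an estimate at all
    # the rest publish some positive fish/human suffering ratio (assumed range)
    published_estimates.append(random.uniform(0.01, 0.2))

print(f"{len(published_estimates)} published estimates; "
      f"zero of them say 'none / the question is confused'")
```

Whatever the true answer is, a literature search over the published estimates can only ever return positive ratios.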
That’s all true, but it’s essentially nitpicking. Nothing important hangs on those estimates being correct. I sure hope you’re not going to keep eating farmed fish based on the estimates being imperfect. Why would you assume fish don’t suffer? The cortex isn’t doing something magically different to raise the suffering of mammals above some threshold into the realm of “real” suffering. Much less is it language and conceptual thinking that makes human suffering the only real kind.
Animals very likely suffer; it’s just emotionally unpleasant for us to accept that, so we find excuses not to think about factory farming.
That’s all true, but it’s essentially nitpicking. Nothing important hangs on those estimates being correct.
But of course it does. If those estimates are wrong (and if they are, why should they only be wrong by such a piddling factor as, say, 5? Why not instead 10^5? Or 10^50? Beware of anchoring bias!), or, even worse, if they are simply meaningless, then the conclusions of the report are of no value and no relevance.
Consider what you’re saying. A group of researchers and philosophers work on this massive report, with its innumerable details, numbers, long chains of reasoning, a mountain of literature reviewed, etc., and you say—oh, it doesn’t matter if any of these numbers they came up with are right? Is that really your position?
(Would you say the same thing if the report’s conclusion was that animals basically don’t matter morally? If that turned out to be the way the numbers come out?)
I sure hope you’re not going to keep eating farmed fish based on the estimates being imperfect.
I sure hope you’re not suggesting that I should stop eating farmed fish based on such philosophically shaky reasoning!
Why would you assume fish don’t suffer?
I do not assume this; I conclude it.
The cortex isn’t doing something magically different to raise the suffering of mammals above some threshold into the realm of “real” suffering.
Citation needed, I’m afraid. (And the word “magically” is, of course, a fnord.)
Animals very likely suffer; it’s just emotionally unpleasant for us to accept that, so we find excuses not to think about factory farming.
On the contrary, I’m perfectly well aware of factory farming.
Consider that people who do not share your conclusions may actually, in fact, disagree with you, both about values and about empirical claims.
Yes, it all hinges on that missing citation about continuity of brain function. After 23 years of studying brain computations, I’ve reached the conclusion that a sharp discontinuity relevant to suffering is wishful thinking. But that requires a good deal more discussion.
This is a much deeper issue. I probably shouldn’t have commented about it so briefly. I’ve resisted commenting on this on LW because it’s an unpopular opinion, and in practical terms it’s far less important than aligning AGI so that we survive to work through our ethics.
For now I’ll just ask you to consider what direction your bias pulls in. I’d far prefer to believe that fish don’t suffer. And I humbly suggest that rationalists aren’t immune to confirmation bias.
Yes, it all hinges on that missing citation about continuity of brain function.
Just on this? Nothing else?
It seems to me that there are quite a few controversial, questionable, or unjustified claims and steps of reasoning involved, beyond this one!
If you disagree—well, I await your persuasive argument to that effect…
For now I’ll just ask you to consider what direction your bias pulls in. I’d far prefer to believe that fish don’t suffer. And I humbly suggest that rationalists aren’t immune to confirmation bias.
Certainly I am not immune to confirmation bias! (I prefer to avoid labeling myself a “rationalist”, though I don’t necessarily object to the term as a description of the social-graph sort…)
But that by itself tells me nothing. To change my beliefs about something, you do actually have to convince me that there’s some reason to update. Just saying “ah, but you could be biased” isn’t enough. Of course I could be biased. This is true of any of my beliefs, on any topic.
Meanwhile, here’s something for you to consider. Suppose you convinced me that fish can suffer. (Let’s avoid specifying how much it turns out that they can suffer, or whether comparing their suffering to that of humans is meaningful; we will say only that they do, in some basically ordinary and not exotic or bizarre sense of the word, exhibit some degree of suffering.)
Would I stop eating fish? Nope.
Why do you conclude that fish don’t suffer?