That’s all true, but it’s essentially nitpicking. Nothing important hangs on those estimates being correct.
But of course it does. If those estimates are wrong (and if they are, why should they only be wrong by such a piddling factor as, say, 5? Why not instead by 10^5? Or 10^50? Beware of anchoring bias!), or, even worse, if they are simply meaningless, then the conclusions of the report are of no value and no relevance.
Consider what you’re saying. A group of researchers and philosophers work on this massive report, with its innumerable details, numbers, long chains of reasoning, a mountain of literature reviewed, etc., and you say—oh, it doesn’t matter if any of these numbers they came up with are right? Is that really your position?
(Would you say the same thing if the report’s conclusion were that animals basically don’t matter morally? If that turned out to be the way the numbers came out?)
I sure hope you’re not going to keep eating farmed fish based on the estimates being imperfect.
I sure hope you’re not suggesting that I should stop eating farmed fish based on such philosophically shaky reasoning!
Why would you assume fish don’t suffer?
I do not assume this; I conclude it.
The cortex isn’t doing something magically different to raise the suffering of mammals above some threshold into the realm of “real” suffering.
Citation needed, I’m afraid. (And the word “magically” is, of course, a fnord.)
Animals very likely suffer, it’s just emotionally unpleasant for us to accept that, so we find excuses to not think about factory farming.
On the contrary, I’m perfectly well aware of factory farming.
Consider that people who do not share your conclusions may actually, in fact, disagree with you, both about values and about empirical claims.
Yes, it all hinges on that missing citation about continuity of brain function. After 23 years of studying brain computations, I’ve reached the conclusion that a sharp discontinuity relevant to suffering is wishful thinking. But that requires a good deal more discussion.
This is a much deeper issue, and I probably shouldn’t have commented on it so briefly. I’ve resisted commenting on this on LW because it’s an unpopular opinion, and in practical terms it’s far less important than aligning AGI so that we survive to work through our ethics.
For now I’ll just ask you to consider what direction your bias pulls in. I’d far prefer to believe that fish don’t suffer. And I humbly suggest that rationalists aren’t immune to confirmation bias.
Yes, it all hinges on that missing citation about continuity of brain function.
Just on this? Nothing else?
It seems to me that there are quite a few controversial, questionable, or unjustified claims and steps of reasoning involved, beyond this one!
If you disagree—well, I await your persuasive argument to that effect…
For now I’ll just ask you to consider what direction your bias pulls in. I’d far prefer to believe that fish don’t suffer. And I humbly suggest that rationalists aren’t immune to confirmation bias.
Certainly I am not immune to confirmation bias! (I prefer to avoid labeling myself a “rationalist”, though I don’t necessarily object to the term as a description of the social-graph sort…)
But that by itself tells me nothing. To change my beliefs about something, you do actually have to convince me that there’s some reason to update. Just saying “ah, but you could be biased” isn’t enough. Of course I could be biased. This is true of any of my beliefs, on any topic.
Meanwhile, here’s something for you to consider. Suppose you convinced me that fish can suffer. (Let’s avoid specifying how much it turns out that they can suffer, or whether comparing their suffering to that of humans is meaningful; we will say only that they do, in some basically ordinary and not exotic or bizarre sense of the word, exhibit some degree of suffering.)
Would I stop eating fish? Nope.