If one accepts Eliezer Yudkowsky’s view on consciousness, the complexity of suffering in particular is largely irrelevant. The claim “qualia require reflectivity” implies that all qualia require reflectivity, including qualia like “what is the color red like?” and “how do smooth and rough surfaces feel different?” These experiences seem to be shaped by vastly different evolutionary pressures, largely unrelated to social accounting.
If you find it worth asking whether suffering in particular is sufficiently complex that it exists in certain animals but not others by virtue of evolutionary pressure, you’re operating in a frame where these arguments are not superseded by the much more generic claim that complex social modeling is necessary to feel anything at all.
If you think Eliezer is very likely to be right, these additional meditations on the nature of suffering are mostly minutiae.
[EDIT to note: I’m mostly pointing this out because there appears to be one group that uses “complex social pressures” to claim animals do not suffer because animals feel nothing at all, and another group that uses “complex social pressures” to claim that animals do not specifically suffer because suffering specifically depends on these things. That these two groups happen to start from a similar guiding principle and happen to reach a similar answer for very different reasons makes me extremely suspicious of the epistemics around the moral patienthood of animals.]
I don’t know what Eliezer’s view is exactly. The parts I do know sound plausible to me, but I don’t have high confidence in any particular view (though I feel pretty confident about illusionism).
My sense is that there are two popular views of ‘are animals moral patients?’ among EAs:
1. Animals are obviously moral patients; there’s no serious doubt about this.
2. It’s hard to be highly confident one way or the other about whether animals are moral patients, so we should think a lot about their welfare on EV grounds. E.g., even if the odds of chickens being moral patients are only 10%, that’s a lot of expected utility on the line (see the rough sketch after this list).
(And then there are views like Eliezer’s, which IME are much less common.)
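To make the expected-value framing in view 2 concrete, here is a minimal sketch with placeholder numbers. Only the 10% figure comes from the comment above; the population size and per-chicken moral weight are purely hypothetical illustrations, not claims about the actual values.

```python
# Rough expected-value sketch of view 2, using placeholder numbers.
p_moral_patient = 0.10      # probability that chickens are moral patients (figure from the comment above)
chickens_at_stake = 9e9     # hypothetical number of chickens whose welfare is in question (placeholder)
weight_per_chicken = 0.01   # hypothetical moral weight per chicken relative to a human (placeholder)

expected_stake = p_moral_patient * chickens_at_stake * weight_per_chicken
print(f"Expected welfare at stake: roughly {expected_stake:,.0f} human-equivalents")
# Even with heavy discounts on both the probability and the moral weight,
# the expected stakes stay large, which is the core of view 2.
```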
My view is basically 2. If you ask me to make my best guess about which species are conscious, then I’ll extremely tentatively guess that it’s only humans, and that consciousness evolved after language. But a wide variety of best guesses are compatible with the basic position in 2.
“The ability to reflect, pass mirror tests, etc. is important for consciousness” sounds relatively plausible to me, but I don’t know of a strong positive reason to accept it—if Eliezer has a detailed model here, I don’t know what it is. My own argument is different, and is something like: the structure, character, etc. of organisms’ minds is under very little direct selection pressure until organisms have language to describe themselves in detail to others; so if consciousness is any complex adaptation that involves reshaping organisms’ inner lives to fit some very specific set of criteria, then it’s likely to be a post-language adaptation. But again, this whole argument is just my current best guess, not something I feel comfortable betting on with any confidence.
I haven’t seen an argument for any 1-style view that seemed at all compelling to me, though I recognize that someone might have a complicated nonstandard model of consciousness that implies 1 (just as Eliezer has a complicated nonstandard model of consciousness that implies chickens aren’t moral patients).
The reason I talk about suffering (and not just consciousness) is:
I’m not confident in either line of reasoning, and both questions are relevant to ‘which species are moral patients?’.
I have nonstandard guesses (though not confident beliefs) about both topics, and if I don’t mention those guesses, people might assume my views are more conventional.
I think that looking at specific types of consciousness (like suffering) can help people think a lot more clearly about consciousness itself. E.g., thinking about scenarios like ‘part of your brain is conscious, but the bodily-damage-detection part isn’t conscious’ can help draw out people’s implicit models of how consciousness works.
Note that not all of my nonstandard views about suffering and consciousness point in the direction of ‘chickens may be less morally important than humans’. E.g., I’ve written before that I put higher probability than most people on ‘chickens are utility monsters, and we should care much more about an individual chicken than about an individual human’—I think this is a pretty straightforward implication of the ‘consciousness is weird and complicated’ view that leads to a bunch of my other conclusions in the OP.
Parts of the OP were also written years apart, and the original reason I wrote up some of the OP content about suffering wasn’t animal-related at all—rather, I was trying to figure out how much to worry about invisibly suffering subsystems of human brains. (Conclusion: It’s at least as worth-worrying-about as chickens, but it’s less worth-worrying-about than I initially thought.)
Thanks for clarifying. To the extent that you aren’t particularly sure about how consciousness comes about, it makes sense to reason about all sorts of possibilities related to capacity for experience and intensity of suffering. In general, I’m just kinda surprised that Eliezer’s view is so unusual, given that he is the Eliezer Yudkowsky of the rationalist community.
My impression is that the justification for the argument you mention is something along the lines of: “the primary reason one would develop a coherent picture of their own mind is so they could convey a convincing story about themselves to others—which only became a relevant need once language developed.”
Based on the first two sections and the similarity of the above logic to the earlier discussion of pain-signaling, I was under the impression that you were focused primarily on suffering. When I think about your more generic argument about consciousness, however, I get confused. While I can imagine why one would benefit from an internal narrative around their goals, desires, etc., I’m not even sure how I’d go about squaring pressures for that capacity with the many basic sensory qualia that people have (e.g., sight, touch), especially in the context of language.
I think things like ‘the ineffable redness of red’ are a side-effect or spandrel. On my account, evolution selected for various kinds of internal cohesion and temporal consistency, introspective accessibility and verbal reportability, moral justifiability and rhetorical compellingness, etc. in weaving together a messy brain into some sort of unified point of view (with an attendant unified personality, unified knowledge, etc.).
This exerted a lot of novel pressures and constrained the solution space a lot, but didn’t constrain it 100%, so you still end up with a lot of weird neither-fitness-improving-nor-fitness-reducing anomalies when you poke at introspection.
This is not a super satisfying response, and it has basically no detail to it, but it’s the least-surprising way I could imagine things shaking out when we have a mature understanding of the mind.