This belief people have that “beings kinda different to me” aren’t suffering strikes me as near-far bias cranked up to 11. Perhaps you don’t notice the pain because it’s relatively minor. I’m assuming you didn’t have your leg chewed off.
In some people, perhaps that is the reasoning; but there really is more to this discussion than anthropocentrism.
Suffering as we experience it is actually a very complicated brain activity, and it’s virtually certain that the real essence of it is in the brain structure rather than the neurotransmitters or other correlates. AFAIK, the full circuitry of the pain center is common to mammals, but not to birds (I could be wrong), fish, or insects. Similar neurotransmitters to ours might be released when a bug finds itself wounded, and its brain might send the impulse to writhe and struggle, but these are not the essence of suffering.
(Similarly, dopamine started out as the trigger for reinforcing connections in very simple brains: a feedback mechanism for actions that led to success, making them more likely to be executed next time. It’s because of that role that it got co-opted into the vast pleasure/reward/memory complexes of the mammalian brain. So I don’t see the release of dopamine in a 1000-neuron brain as an indication that pleasure is being experienced there.)
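To make the feedback loop in that parenthetical concrete, here is a toy sketch of reward-modulated action selection. Everything in it (the action names, the weights, the reward values) is invented for illustration; it is a cartoon of reinforcement, not a model of any actual brain:

```python
import random

# Toy sketch: a "dopamine-like" scalar reward strengthens whichever action
# was just taken, making it more likely to be chosen next time. All names
# and numbers here are hypothetical.

weights = {"approach": 1.0, "freeze": 1.0, "flee": 1.0}

def choose_action():
    # Sample an action with probability proportional to its weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for action, w in weights.items():
        r -= w
        if r <= 0:
            return action
    return action

def reinforce(action, reward, learning_rate=0.1):
    # The "dopamine" step: success (reward > 0) strengthens the connection
    # that produced the action; failure weakens it slightly.
    weights[action] = max(0.01, weights[action] + learning_rate * reward)

for _ in range(1000):
    a = choose_action()
    # Hypothetical environment: only "flee" pays off.
    reinforce(a, reward=1.0 if a == "flee" else -0.1)

print(weights)  # "flee" ends up dominant: success made it recur
```

The point of the sketch is only that this loop needs no experience of pleasure to do its job, which is why the mere presence of the signal in a tiny brain tells us little.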
I agree with your points on pain and suffering; more about that in an earlier Less Wrong post here.
However, reducing the ocean of suffering still leaves you with an ocean. And that suffering is in every sense of the word perverse. If you were constructing a utopia, your first thought would hardly be “well, let’s get these animals fighting and eating each other”. Anyone looking at your design would exclaim: “What kind of perverse utopia is that?! Are you sick?!” Now, you might be able to give a sophisticated explanation of why that suffering was necessary, but that doesn’t change the fact that your utopia is perverted. My point is that we have to accept the perversion. And denying perversion is simply more perversion.
To specify a particular theory: my guess is that suffering is an evolved elaboration on pain, unique to social mammals or possibly shared by social organisms of all sorts. It seems likely to me that it basically mediates an exchange of long-term status for immediate help from group members.
Perhaps: pain is near-mode; suffering is far-mode. Scenario: my leg is getting chewed off.
Near-mode thinking: direct all attention to attempt to remove the immediate source of pain / fight or flight / (instinctive) scream for attention
Far-mode thinking: reevaluate the longer-term life and social consequences of having my leg chewed off / dwell on the problem in the abstract
I agree with this point, and I’d bet karma at better than even odds that so does Michael Vassar.
I agree, but I wonder if my confidence in my extrapolation agreeing is greater or less than your confidence in my agreeing was. I tend to claim very much greater than typical agnosticism about the subjective nature of nearby (in an absolute sense) mind-space. I bet a superintelligence could remove my leg without my noticing and I’m curious as to the general layout of the space of ways in which it could remove my leg and have me scream and express horror or agony at my leg’s loss without my noticing.
I really do think that, at a best guess according to my extrapolated values, human suffering outweighs that of the rest of the biosphere, most likely by a large ratio (best guess might be between one and two orders of magnitude). Much more importantly, at a best guess, human ‘unachieved but reasonably achievable without superintelligence’ flourishing outweighs the animal analog by many orders of magnitude, and if the two can be put on a common scale I wouldn’t be surprised if the former is a MUCH bigger problem than suffering. I also wouldn’t be shocked if the majority of total suffering in basically Earth-like worlds (and thus the largest source of expected suffering given our epistemic state) comes from something utterly stupid, such as people happening to take up the factory farming of some species which happens, for no particularly good reason, to be freakishly capable of suffering. Sensitivity to long tails tends to be a dominant feature of serious expected-utility calculus given my current set of heuristics. The modal disvalue I might put on a pig living its life in a factory farm is under half the median, which is under half the mean.
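A long right tail produces exactly that mode < median < mean ordering. A minimal sketch, assuming a lognormal distribution with parameters chosen purely for illustration (nothing in the comment specifies a distribution):

```python
import math

# Illustrative only: for a lognormal, mode = exp(mu - sigma^2),
# median = exp(mu), mean = exp(mu + sigma^2 / 2). With sigma = 1.2
# (an editor-chosen value), each is under half the next.
mu, sigma = 0.0, 1.2

mode   = math.exp(mu - sigma**2)       # ~0.24
median = math.exp(mu)                  # 1.0
mean   = math.exp(mu + sigma**2 / 2)   # ~2.05

print(f"mode={mode:.2f}  median={median:.2f}  mean={mean:.2f}")
print(f"mode/median={mode/median:.2f}  median/mean={median/mean:.2f}")
# mode/median ~ 0.24 and median/mean ~ 0.49: the modal value is under half
# the median, which is under half the mean, exactly the pattern a fat right
# tail produces in an expected-utility estimate.
```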
That’s surely a common reason, but are you sure you’re not letting morally loaded annoyance at that phenomenon prejudice you against the proposition?
The cognitive differences between a human and a cow or a spider go far beyond “kinda”, and, AFAIK, nobody really knows what “suffering” (in the sense we assign disutility to) is. Shared confusion creates room for reasonable disagreement over best guesses (though possibly not reasonable disagreement over how confused we are).
(See also.)
It doesn’t take much near-thinking to draw a distinction between “signals to our brain that are indicative of damage inflicted to a body part” on the one hand, and “the realization that major portions of our life plans have to be scrapped in consequence of damaged body parts” on the other. The former only requires a nervous system, the latter requires the sort of nervous system that makes and cares about plans.
Yes, but that assumes this difference is favorable to your hypothesis. David Foster Wallace from “Consider The Lobster”:

Lobsters do not, on the other hand, appear to have the equipment for making or absorbing natural opioids like endorphins and enkephalins, which are what more advanced nervous systems use to try to handle intense pain. From this fact, though, one could conclude either that lobsters are maybe even more vulnerable to pain, since they lack mammalian nervous systems’ built-in analgesia, or, instead, that the absence of natural opioids implies an absence of the really intense pain-sensations that natural opioids are designed to mitigate. I for one can detect a marked upswing in mood as I contemplate this latter possibility...
The entire article is here and that particular passage is here. And later:

Still, after all the abstract intellection, there remain the facts of the frantically clanking lid, the pathetic clinging to the edge of the pot. Standing at the stove, it is hard to deny in any meaningful way that this is a living creature experiencing pain and wishing to avoid/escape the painful experience. To my lay mind, the lobster’s behavior in the kettle appears to be the expression of a preference; and it may well be that an ability to form preferences is the decisive criterion for real suffering.
In this last paragraph (which btw is immediately preceded, in the article, by an observation strikingly similar to mine in the grandparent), I would argue that “frantically” and “pathetic” are projections: the emotions they refer to originate in the viewer’s mind, not in the lobster’s.
We are demonstrably equipped with mental mechanisms whereby we can observe behaviour in others, and as a result of such observations we can experience “ascribed emotions”, which can sometimes take on an intensity not far removed from the sensations that originate in ourselves. That’s where our intuition that the lobster is in pain comes from.
Later in the article, the author argues that lobsters “are known to exhibit preferences”. Well, plants are known to exhibit preferences; they will for instance move so as to face the sun. We do not infer that plants can experience suffering.
We could build a robot today that would sense aspects of its surroundings, such as elevated temperature, and we could program that robot to give a higher priority to its “get the hell away from here” program when such conditions obtained. We would then be in a position to observe the robot doing the same thing as the lobster; we would, quite possibly, experience empathy with the robot. But we would not, I think, conclude that it is morally wrong to put the robot in boiling water. We would say that conclusion is a mistake, because we have not built into the robot the degree of personhood that would justify it.
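For concreteness, the robot described here needs nothing more than a fixed priority rule. A minimal sketch under invented assumptions (the sensor stub, the threshold, and the behavior names are all hypothetical):

```python
# Everything here is hypothetical: the sensor reading, the 50-degree
# threshold, and the two behaviors are invented for illustration. The point
# is only that a fixed priority rule produces lobster-like escape behavior
# with no inner life required.

def read_temperature_celsius():
    # Stand-in for a real sensor; pretend the robot is in hot water.
    return 95.0

def escape_behavior():
    return "back away from heat source at full speed"

def default_behavior():
    return "continue exploring"

def control_step(danger_threshold=50.0):
    # Priority arbitration: the escape behavior preempts everything else
    # whenever the temperature sensor crosses the threshold.
    if read_temperature_celsius() > danger_threshold:
        return escape_behavior()
    return default_behavior()

print(control_step())  # -> "back away from heat source at full speed"
```

An observer watching the resulting thrashing could project fear onto it, which is the sense in which the empathy precedes, rather than detects, the suffering.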
cf. “The Soul of the Mark III Beast”, Terrel Miedaner, included in The Mind’s I, Dennett & Hofstadter.
Trust this community to connect the idea to the reference so quickly. “In Hofstadter we trust” :-)
For those who are not helped by the citation, it turns out that someone thoughtfully posted the relevant quote from the book on their website. I recommend reading it; the story is philosophically interesting and emotionally compelling.
The story was also dramatized in a segment of the movie Victim of the Brain, which is available in its entirety from Google Video. The relevant part begins at around 8:40.
Here is the description of the movie:

1988 docudrama about “the ideas of Douglas Hofstadter”. It was created by Dutch director Piet Hoenderdos. Features interviews with Douglas Hofstadter and Dan Dennett. Dennett also stars as himself. Original acquired from the Center for Research in Concepts and Cognition at Indiana University. Uploaded with permission from Douglas Hofstadter. Uploaded by Virgil Griffith.
That was fascinating. A lot of the point of the story—the implicit claim—was that you’d feel for an entity based on the way its appearance and behavior connected to your sympathy—like crying sounds eliciting pity.
In text that’s not so hard, because you can write things like “a shrill noise like a cry of fright” when the simple robot dodges a hammer. The words used to explain the sound are automatically loaded with mental assumptions about “fright”, simply to convey the sound to the reader.
With video the challenge seems like it would be much harder. It becomes more possible that people would feel nothing for some reason. Perhaps for technical reasons of video quality or bad acting, or for reasons more specific to the viewer (desensitized to video violence?), or maybe because the implicit theory about how mind-attribution is elicited is simply false.
Watching it turned out to be interesting on more levels than I’d have thought, because I did feel things, but I also noticed the visual tropes that are the equivalent of mind-laden text… like music playing as the robot (off camera) cries and the camera slowly pans over the wreckage of previously destroyed robots.
Also, I thought it was interesting the way they switched the roles for the naive mysterian and the philosopher of mind, with the mysterian being played by a man and the philosopher being played by a woman… with her hair pinned up, scary eye shadow, and black stockings.
“She’s a witch! Burn her!”
Some Jains and Buddhists infer that plants can experience suffering. The stricter Jain diet avoids vegetables that are harvested by killing the plant, like carrots and potatoes, in favor of fruits and grains that the plant gives up voluntarily or that come from already-dead plants.
That’s a preference of theirs; fine by me, but not obviously evidence-based.
I don’t mean to suggest that plants are clearly sentient, just that it’s plausible, even for a human, to have a coherent value system which attempts to avoid the suffering of anything which exhibits preferences.
I’d agree with that sentence if you replaced the word “suffering”, unsuitable because of its complex connotations, with “killing”, which seems adequate to capture the Jains’ intuitions as represented in the link above.
Although it is relevant to note that the motive may be to avoid suffering: I wasn’t there when the doctrine was formed, and haven’t read the relevant texts, but it is possible that the presence of apparent preferences was interpreted as implying suffering.