You only included the last sentence of Dawkins’ quote. Here’s the full quote:
The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are slowly being devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst, and disease. It must be so. If there ever is a time of plenty, this very fact will automatically lead to an increase in the population until the natural state of starvation and misery is restored. In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won’t find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.
The universe is perverse. You have to learn to love it in spite of that.
What? Why would you love the indifferent universe? It has to be transformed.

Right. Materialism tells us that we’re probably going to die and it’s not going to be okay; the right way to feel good about it is to do something about it.

My attitude is easier to transform than the universe’s attitude.

Maybe easier, but is it the right thing to do? The obvious analogy is wireheading. See also: Morality as Fixed Computation.
Emotions ≠ preferences. It may be that something in the vague category “loving the universe” is (maybe depending on your personality) a winning attitude (or more winning than many people’s existing attitudes) regardless of your morality. (Of course, yes, in changing your attitude you would have to be careful not to delude yourself about your preferences, and most people advocating changing your attitude don’t seem to clearly make the distinction.)
I certainly make that distinction. But it seems to me that “loving” the current wasteland is not an appropriate emotion. Wireheading is wrong not only when/because you stop caring about other things.
But it seems to me that “loving” the current wasteland is not an appropriate emotion.
Granted. It seems to me that the kernel of truth in the original statement is something like “you are not obligated to be depressed that the universe poorly satisfies your preferences”, which (ISTM) some people do need to be told.
Since when has being “good enough” been a prerequisite for loving something (or someone)? In this world, that’s a quick route to a dismal life indeed.
There’s the old saying in the USA: “My country, right or wrong; if right, to be kept right; and if wrong, to be set right.” The sentiment carries just as well, I think, for the universe as a whole. Things as they are may be very wrong indeed, but what does it solve to hate the universe for it? Humans have a long history of loving not what is perfect, but what is broken—the danger lies not in the emotion, but in failing to heal the damage. It may be a crapsack universe out there, but it’s still our sack of crap.
By all means, don’t look away from the tragedies of the world. Figuratively, you can rage at the void and twist the universe to your will, or you can sit the universe down and stage a loving intervention. The main difference between the two, however, is how you feel about the process; the universe, for better or worse, really isn’t going to notice.
Insisting on being unhappy that the universe poorly satisfies your preferences is certainly contrary, if not perverse. Of course, humans greatly value their ability to imagine and desire that the universe be different. This desire might only be perverse if it is impossible to modify the universe to satisfy your preferences. This is the situation that dissatisfied materialists could find themselves in: a materialistic world is a world that cannot be modified to suit their preferences.
[last paragraph taken out as off-topic and overly speculative]
Emotions ≠ preferences. It seems likely to me that loving the universe is (maybe depending on your personality) a winning attitude (or is more winning than many people’s attitudes) regardless of your morality.
There’s no need to “transform” the universe. The universe is the same if we modify the universe to satisfy our evolved goals, or we modify our goals to be satisfied by the universe. The latter is at least coherent, whereas the former is persisting in the desire to impose a set of values on the universe even after you’ve realized those desires are arbitrary and perhaps not even salvageably self-consistent without modification. What kind of intelligence would be interested in that?
To put it another way, as intelligence increases, we will increasingly modify our goals to what is possible. Given the deterministic nature of the universe, that’s a lot of modification.
A lot more is possible than what is currently present. You don’t need to modify unreachable programming; it just doesn’t run (until it does).

I heard lobotomy is an excellent way to do that.

The amount of pain in nature is immense. Suffering? I’m not so sure. That’s a technical question, even if we don’t yet know how to ask the right question. A black widow male is certainly in pain as it’s eaten but is very likely not suffering. Many times each day I notice that I have been in pain that I was unaware of. The Continental Philosophy and Women’s Studies traditions concern themselves with suffering that people aren’t aware of, but don’t suggest that such suffering comes in varieties that many animals could plausibly experience.
This belief people have that “beings kinda different to me” aren’t suffering strikes me as near-far bias cranked up to 11. Perhaps you don’t notice the pain because it’s relatively minor. I’m assuming you didn’t have your leg chewed off.
This belief people have that “beings kinda different to me” aren’t suffering strikes me as near-far bias cranked up to 11.
In some people, perhaps that is the reasoning; but there really is more to this discussion than anthropocentrism.
Suffering as we experience it is actually a very complicated brain activity, and it’s virtually certain that the real essence of it is in the brain structure rather than the neurotransmitters or other correlates. AFAIK, the full circuitry of the pain center is common to mammals, but not to birds (I could be wrong), fish, or insects. Similar neurotransmitters to ours might be released when a bug finds itself wounded, and its brain might send the impulse to writhe and struggle, but these are not the essence of suffering.
(Similarly, dopamine started out as the trigger for reinforcing connections in very simple brains, as a feedback mechanism for actions that led to success, making them more likely to execute next time. It’s because of that role that it got co-opted into the vast pleasure/reward/memory complexes in the mammalian brain. So I don’t see the release of dopamine in a 1000-neuron brain as an indication that pleasure is being experienced there.)
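As a toy illustration of the feedback loop described in the parenthetical above (a minimal sketch of my own; the action names, payoff rule, and 1.05 “learning rate” are invented, and nothing here is meant to model real dopamine), reinforcing whichever action happened to lead to success makes that action progressively more likely to be selected next time:

```python
import random

random.seed(0)
# Hypothetical action repertoire with equal starting weights.
weights = {"approach": 1.0, "retreat": 1.0, "wander": 1.0}

def succeeded(action: str) -> bool:
    # Invented stand-in for the environment: only "approach" ever pays off here.
    return action == "approach"

for _ in range(200):
    actions = list(weights)
    action = random.choices(actions, weights=[weights[a] for a in actions])[0]
    if succeeded(action):
        weights[action] *= 1.05  # the feedback signal: success bumps the weight

print(weights)  # "approach" ends up with by far the largest weight
```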
I agree with your points on pain and suffering; more about that in an earlier Less Wrong post here.
However, reducing the ocean of suffering still leaves you with an ocean. And that suffering is in every sense of the word perverse. If you were constructing a utopia, your first thought would hardly be “well, let’s get these animals fighting and eating each other”. Anyone looking at your design would exclaim, “What kind of perverse utopia is that?! Are you sick?!” Now, it may be the case that you could give a sophisticated explanation as to why that suffering was necessary, but it doesn’t change the fact that your utopia is perverted. My point is that we have to accept the perversion. And denying perversion is simply more perversion.
To specify a particular theory, my guess is that suffering is an evolved elaboration on pain unique to social mammals or possibly shared by social organisms of all sorts. It seems likely to me to basically mediate an exchange of long-term status for help from group members now.
Perhaps: pain is near-mode; suffering is far-mode. Scenario: my leg is getting chewed off.

Near-mode thinking: direct all attention to attempt to remove the immediate source of pain / fight or flight / (instinctive) scream for attention

Far-mode thinking: reevaluate the longer-term life and social consequences of having my leg chewed off / dwell on the problem in the abstract

I agree with this point, and I’d bet karma at better than even odds that so does Michael Vassar.

I agree, but I wonder if my confidence in my extrapolation agreeing is greater or less than your confidence in my agreeing was. I tend to claim very much greater than typical agnosticism about the subjective nature of nearby (in an absolute sense) mind-space. I bet a superintelligence could remove my leg without my noticing and I’m curious as to the general layout of the space of ways in which it could remove my leg and have me scream and express horror or agony at my leg’s loss without my noticing.
I really do think that at a best guess, according to my extrapolated values, human suffering outweighs that of the rest of the biosphere, most likely by a large ratio (best guess might be between one and two orders of magnitude). Much more importantly, at a best guess, human ‘unachieved but reasonably achievable without superintelligence flourishing’ outweighs the animal analog by many orders of magnitude, and if the two can be put on a common scale I wouldn’t be surprised if the former is a MUCH bigger problem than suffering. I also wouldn’t be shocked if the majority of total suffering in basically Earth-like worlds (and thus the largest source of expected suffering given our epistemic state) comes from something utterly stupid, such as people happening to take up the factory farming of some species which happens, for no particularly good reason, to be freakishly capable of suffering. Sensitivity to long tails tends to be a dominant feature of serious expected utility calculus given my current set of heuristics. The modal disvalue I might put on a pig living its life in a factory farm is under half the median, which is under half the mean.
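The last sentence describes a heavily right-skewed distribution of disvalue. As a purely illustrative numeric sketch (my own construction, not the commenter’s numbers; a lognormal shape is assumed only for illustration), here is how a long right tail pulls the mean well above the median, and the median well above the mode:

```python
import math
import random
import statistics

random.seed(0)
MU, SIGMA = 0.0, 1.5  # invented lognormal parameters, chosen only to produce a long tail

samples = [random.lognormvariate(MU, SIGMA) for _ in range(100_000)]

mode = math.exp(MU - SIGMA**2)        # analytic mode of a lognormal
median = statistics.median(samples)   # close to exp(MU) = 1.0
mean = statistics.fmean(samples)      # close to exp(MU + SIGMA**2 / 2), about 3.1

print(f"mode   ~ {mode:.2f}")    # roughly 0.11
print(f"median ~ {median:.2f}")  # roughly 1.0
print(f"mean   ~ {mean:.2f}")    # roughly 3.1
# Here the mode is under half the median, which is under half the mean, mirroring the
# shape described above: the expected value is dominated by the rare, extreme tail.
```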
This belief people have that “beings kinda different to me” aren’t suffering strikes me as near-far bias cranked up to 11.
That’s surely a common reason, but are you sure you’re not letting morally loaded annoyance at that phenomenon prejudice you against the proposition?
The cognitive differences between a human and a cow or a spider go far beyond “kinda”, and, AFAIK, nobody really knows what “suffering” (in the sense we assign disutility to) is. Shared confusion creates room for reasonable disagreement over best guesses (though possibly not reasonable disagreement over how confused we are).

(See also.)
It doesn’t take much near-thinking to draw a distinction between “signals to our brain that are indicative of damage inflicted to a body part” on the one hand, and “the realization that major portions of our life plans have to be scrapped in consequence of damaged body parts” on the other. The former only requires a nervous system; the latter requires the sort of nervous system that makes and cares about plans.
Yes, but that assumes this difference is favorable to your hypothesis. David Foster Wallace, from “Consider the Lobster”:
Lobsters do not, on the other hand, appear to have the equipment for making or absorbing natural opioids like endorphins and enkephalins, which are what more advanced nervous systems use to try to handle intense pain. From this fact, though, one could conclude either that lobsters are maybe even more vulnerable to pain, since they lack mammalian nervous systems’ built-in analgesia, or, instead, that the absence of natural opioids implies an absence of the really intense pain-sensations that natural opioids are designed to mitigate. I for one can detect a marked upswing in mood as I contemplate this latter possibility...
The entire article is here and that particular passage is here. And later:
Still, after all the abstract intellection, there remain the facts of the frantically clanking lid, the pathetic clinging to the edge of the pot. Standing at the stove, it is hard to deny in any meaningful way that this is a living creature experiencing pain and wishing to avoid/escape the painful experience. To my lay mind, the lobster’s behavior in the kettle appears to be the expression of a preference; and it may well be that an ability to form preferences is the decisive criterion for real suffering.
In this last paragraph (which btw is immediately preceded, in the article, by an observation strikingly similar to mine in the grandparent), I would argue that “frantically” and “pathetic” are projections: the emotions they refer to originate in the viewer’s mind, not in the lobster’s.
We are demonstrably equipped with mental mechanisms whereby we can observe behaviour in others, and as a result of such observations we can experience “ascribed emotions”, which can sometimes take on an intensity not far removed from the sensations that originate in ourselves. That’s where our intuition that the lobster is in pain comes from.
Later in the article, the author argues that lobsters “are known to exhibit preferences”. Well, plants are known to exhibit preferences; they will for instance move so as to face the sun. We do not infer that plants can experience suffering.
We could build a robot today that would sense aspects of its surroundings such as elevated temperature, and we could program that robot to give a higher priority to its “get the hell away from here” program when such conditions obtained. We would then be in a position to observe the robot doing the same thing as the lobster; we would, quite possibly, experience empathy with the robot. But we would not, I think, conclude that it is morally wrong to put the robot in boiling water. We would say that’s a mistake, because we have not built into the robot the degree of personhood that would warrant such a conclusion.
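For concreteness, a minimal sketch of the kind of robot just described (entirely my own construction; the threshold, task names, and function are invented): a fixed-priority rule in which the escape behaviour preempts whatever the robot was otherwise doing once a temperature reading crosses the threshold.

```python
ESCAPE_TEMP_C = 60.0  # hypothetical danger threshold

def choose_behaviour(temperature_c: float, normal_task: str = "explore") -> str:
    """Pick the behaviour to run this tick; escaping heat has the highest priority."""
    if temperature_c >= ESCAPE_TEMP_C:
        return "get the hell away from here"
    return normal_task

print(choose_behaviour(25.0))  # -> "explore"
print(choose_behaviour(95.0))  # -> "get the hell away from here"
```

The point, as in the comment above, is how little machinery this takes: behaviour like the lobster’s can be produced by a few lines of condition-checking, with nothing resembling personhood behind it.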
cf. “The Soul of the Mark III Beast”, Terrel Miedaner, included in The Mind’s I, Dennett & Hofstadter.

Trust this community to connect the idea to the reference so quickly. “In Hofstadter we trust” :-)
For those who are not helped by the citation, it turns out that someone thoughtfully posted the relevant quote from the book on their website. I recommend reading it; the story is philosophically interesting and emotionally compelling.
The story was also dramatized in a segment of the movie Victim of the Brain, which is available in its entirety from Google Video. The relevant part begins at around 8:40.
Here is the description of the movie:
1988 docudrama about “the ideas of Douglas Hofstadter”. It was created by Dutch director Piet Hoenderdos. Features interviews with Douglas Hofstadter and Dan Dennett. Dennett also stars as himself. Original acquired from the Center for Research in Concepts and Cognition at Indiana University. Uploaded with permission from Douglas Hofstadter. Uploaded by Virgil Griffith.
That was fascinating. A lot of the point of the story—the implicit claim—was that you’d feel for an entity based on the way its appearance and behavior connected to your sympathy—like crying sounds eliciting pity.
In text that’s not so hard, because you can write things like “a shrill noise like a cry of fright” when the simple robot dodges a hammer. The text used to describe the sound is automatically loaded with mental assumptions about “fright”, simply to convey the sound to the reader.
With video the challenge seems like it would be much harder. It becomes more possible that people would feel nothing for some reason. Perhaps for technical reasons of video quality or bad acting, or for reasons more specific to the viewer (desensitized to video violence?), or maybe because the implicit theory about how mind-attribution is elicited is simply false.
Watching it turned out to be interesting on more levels than I’d have thought, because I did feel things, but I also noticed the visual tropes that are equivalent to mind-laden text… like music playing as the robot (off camera) cries and the camera slowly pans over the wreckage of previously destroyed robots.
Also, I thought it was interesting the way they switched the roles for the naive mysterian and the philosopher of mind, with the mysterian being played by a man and the philosopher being played by a woman… with her hair pinned up, scary eye shadow, and black stockings.
“She’s a witch! Burn her!”

Some Jains and Buddhists infer that plants can experience suffering. The stricter Jain diet avoids vegetables that are harvested by killing plants, like carrots and potatoes, in favor of fruits and grains that come voluntarily or from already-dead plants.

That’s a preference of theirs; fine by me, but not obviously evidence-based.
I don’t mean to suggest that plants are clearly sentient, just that it’s plausible, even for a human, to have a coherent value system which attempts to avoid the suffering of anything which exhibits preferences.
I’d agree with that sentence if you replaced the word “suffering”, unsuitable because of its complex connotations, with “killing”, which seems adequate to capture the Jains’ intuitions as represented in the link above.
Although it is relevant to note that the motive may be to avoid suffering—I wasn’t there when the doctrine was formed, and haven’t read the relevant texts, but it is possible that the presence of apparent preferences was interpreted as implying as much.