Why does Eliezer love me?
In many articles, EY mentions that Death is bad, as if it’s some terminal value. That even the loss of me is somehow negative for him. Why?
I’ve been thinking that it’s Suffering that should be minimized, in general. Death is only painful for people because of the loss others suffer. Yes, the logical conclusion is that we should completely destroy the universe, in a quick and painless manner. The “painless” part is the catch, of course, and it may be so intractable as to render the entire thought pointless. (That is, we cannot achieve this, so we might as well give up and focus on making things better.)
Even outside of Suffering, I still do not see why an arbitrary person is to be valued. Again, EY seems to have this as some terminal value. Why?
I love my children, I love my family, I love some friends. After that, I don’t really care all that much about individuals, except to the extent that I’d prefer them not to suffer. I certainly don’t feel that their existence alone is intrinsically all that valuable.
Am I wicked or something? Am I missing some basic reasoning? I see my viewpoint may be labeled “negative utilitarian”, but I haven’t come across anything in particular that makes such a position less desirable.
A good portion of LessWrong is unreadable for me as it is based on some kind of altruistic axiom. Personally, I care about myself, my immediate family and a few friends. I will feel a pang of suffering when I see people suffering but I do not feel that pang when I hear about people I don’t know suffering, so I conclude that I don’t care about other people beyond some abstract measure of proximity and their economic utility for me.
So if there were a button you could press that would make one of your close friends happier but would kill someone you haven’t met, you would be totally ok pressing it?
I wouldn’t, but that’s more because of superrationality reasons (if I could sign a contract with everybody else in the world committing to never press such a button, I totally would sign it) than because I don’t really care about my friend that much more than about the stranger.
Oh, so many variations to this experiment to test the intuition behind my position.
Your version? Depends on how much happier this friend gets. If it is the equivalent of having a cup of coffee, I’d just get them that and live on knowing that I am not a murderer. If it is eternal bliss this friend gets, then I wouldn’t do it either, as I’d get jealous and have to live with that and with the fact that I am a murderer.
I’d be willing to press the button for personal gain though. Not for a cup of coffee, but a higher threshold.
What I would be willing to do, though, is press a button that prevents a person from being born, as long as it is not one of my potential heirs or friends.
I care about (read: have a vested interest in) people that can influence my wellbeing and choices. Because all human beings have the potential to do this, I care about them to some degree, great or small. Because I cannot physically empathize with seven billion humans at once on an equal or appropriate level, I use a general altruistic axiom to determine how to act towards people I do not have the resources to physically care about.
That’s my reason, at least, for having an altruistic axiom, explained in a terribly simple manner. I’m sure there are other, better explanations for working off altruistic axioms. I’m not making a case for the axiom, just explaining what I see as my reasons for having it.
This thing is turning into a tautology. I care about people to the degree that they are useful to me. My friends and family are incredibly useful in the great state of mind they put me in. A person living in extreme poverty whom I have never met, not so much. They could be useful if they were highly educated and had access to sufficient capital to leverage knowledge complementary to my skills, but the initial investment far exceeds the potential gain.
What irks me is not the statement above but the tradeoff being made in utilitarianism: That the pain of other people should count as much as my pain. It simply does not.
If everyone (or just most people) thinks like you, then seeing people suffer makes them suffer as well. And that makes their friends suffer, and so on. So, by transitivity, you should expect to suffer at least a little bit when people you don’t know directly are suffering.
But I don’t think it is about the feeling. I also don’t really feel anything when I hear about some number of people dying in a far away place. Still, I believe that the world would be a better place if people were not dying there. If I am in a position to help people, I believe that in the long run the result is better if I just shut up and multiply and help many far away people, rather than caring mostly about a few friends and neighbors.
If we’d all just cooperate maybe this would be a better world. But we don’t and it is not.
I have yet to see a calculation that shows that my gift to some far-away people, instead of a fine dinner with my friends, will give me a return on my money in the long run. Assume that all people do this, to avoid free-rider arguments.
You don’t know that he does. You only know that he says he does. Also, MIRI needs your donations!
In all seriousness, it appears that he simply has a much larger circle of empathy than you do. Yours only includes yourself, your children, family and friends, which sounds like (what Peter Singer has convincingly argued to be) the default setting for empathy that evolution presumably gave you because it promoted the survival of your genes. But that circle can expand, and in fact it has tended to expand over the last couple of millennia. In Eliezer’s case, it appears to include at least all humans. And why? Well, my suspicion is that people have a distaste for contradictions, and any arbitrary limit to empathy is inherently fraught with contradictions. (“Is it okay for a policeman to not care about you because you’re not his friend?” “How many non-friends would you kill to save the life of a friend?” etc.) And maybe Eliezer simply has a greater sensitivity to, and distaste for, contradictions than you do.
This is something to think about, thanks.
What about the seeming preference for existence over non-existence? How do you morally justify keeping people around when there is so much suffering? In the dust specks versus torture problem, why not simply erase everyone?
People, by and large, appear to favor suffering over suicide. I don’t think it can be ethical to overrule that choice.
They just don’t know how bad suffering gets.
It is if we define a utility function with a strict failure mode for TotalSuffering > 0. Non-existent people don’t really count, do they?
Yeah, but… we don’t.
(Below I’m going to address that case specifically. However, more generally, defining utility functions which assign zero utility to a broad class of possible worlds is a problem, because then you’re indifferent between all of them. Does running around stabbing children seem like a morally neutral act to you, in light of the fact that doing it or not doing it will not have an effect on total utility (because total suffering will remain positive)? If no, that’s not the utility function you want to talk about.)
Anyway, as far as I can tell, you’ve either discovered or reinvented negative utilitarianism. Pretty much no one around here accepts negative utilitarianism, mostly on the grounds of it disagreeing very strongly with moral intuition. (For example, most people would not regard it as a moral act to instantly obliterate Earth and everyone on it.) For me, at least, my objection is that I prefer to live with some suffering rather than not to live at all, and this would be true even if I were perfectly selfish and didn’t care what effects my death would have on anyone else. So before we can talk usefully about this, I have to ask: leaving aside concerns about the effects of your death on others, would you prefer to die rather than live with any amount of suffering?
Thanks for the reply. Yes, I found out the term is “negative utilitarianism”. I suppose I can search and find rebuttals of that concept. I didn’t mean that the function was “if suffering > 0 then 0”, just that suffering should be a massively dominating term, so that no possible world with real suffering outranks a world with less suffering.
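To make the distinction concrete, here is one way the two readings above could be written down. This is only an illustrative sketch, not a formula anyone in the thread has endorsed; S(w) and H(w) are placeholders for total suffering and total happiness in a possible world w.

```latex
% "Strict failure mode" reading: every world with any suffering at all gets the
% same zero utility, so the function is indifferent between all such worlds
% (this is the indifference objection raised above).
U_{\text{strict}}(w) =
\begin{cases}
  1 & \text{if } S(w) = 0, \\
  0 & \text{if } S(w) > 0.
\end{cases}

% "Massively dominating term" reading: rank worlds first by less suffering and
% only then by more happiness, i.e. compare the pairs lexicographically. This is
% an ordering rather than a single number, and it still distinguishes between
% worlds with different amounts of suffering.
w_1 \succeq w_2 \iff \bigl(-S(w_1),\, H(w_1)\bigr) \geq_{\text{lex}} \bigl(-S(w_2),\, H(w_2)\bigr)
```

Under the second reading the “everything is morally neutral once suffering is positive” objection no longer applies, but the “obliterate Earth” objection still does, since a world with nobody in it has the minimum possible suffering.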
As to your question about my personal preference on life, it really depends on the level of suffering. At the moment, no, things are alright. But it has not always been that way, and it’s not hard to see it crossing over again.
I would definitely obliterate everyone on Earth, though, and would view not doing so, if capable, as immoral. Purely because so many sentient creatures are undergoing a terrible existence, and the fact that you and I are having an alright time doesn’t make up for it.
All else being equal, if you have the choice, would you pick (a) your son/daughter immediately ceases to exist, or (b) your son/daughter experiences a very long, joyous life, filled with love and challenge and learning, and yes, some dust specks and suffering, but overall something they would describe as “an awesome time”? (The fact that you might be upset if they ceased to exist is not the point here, so let it be specified that (a) is actually everyone disappearing, which includes your child as a special case, and likewise (b) for everyone, again including your child as a special case.)
If the suffering “rounds down” to 0 for everyone, sure, (b) is fine. That is, a bit of pain in order to keep Fun. But no hellish levels of suffering for anyone. Otherwise, (a). Given how the world currently looks, and MWI, it’s hard to see how it’s possible to end up with everyone having pain that rounds down to 0.
So given the current world and my current understanding, if someone gave me a button to press that’d eliminate Earth in a minute or so, I’d press it without hesitation.
Living among billions of happy people who have realistic chances to meet their goals is a world I find much more desirable than a world where my friends and I are the only successful people in existence.
On one hand, there’s the cold utilitarian who only values other lives inasmuch as they further hir goals, and assigns no intrinsic worth to whichever goals they may have for themselves. This position does not coincide with solipsism, but it overlaps with it. On the other hand, there’s what we could call the naïve Catholic, who holds that more life is always better life, no matter in what horrid conditions. This position does not coincide with panpsychism, but it overlaps with it.
The strong altruistic component of EY’s philosophy is what sets it on a higher moral ground than Ayn Rand’s. For all her support of reason, Rand’s fatal flaw was that she failed to grasp the need for altruism; it was anathema to her, even if her brand of selfishness was strange in that she recognized other people’s right to be selfish too (the popular understanding of selfishness is more predatory than even she allowed).
EY agrees with Rand’s position that every mind should be free to improve itself, but he doesn’t dismiss cooperation. It makes perfect sense: The ferociously competitive realm of natural selection does often select for cooperation, which strongly suggests it’s a useful strategy. I can’t claim to divine his reasons, but the bottom line is that EY gets altruism.
(As chaosmage suggested, it is not impossible that EY merely pretends to be an altruist so people will feel more comfortable letting him talk his way into world domination (ahem, optimization), but the writing style of his texts about the future of humanity, and about how much it matters to him, is more consistent with someone who really believes what he says.)
Still, the question stands: Why care about random people? I notice it’s difficult for me to verbalize this point because it’s intuitively obvious to me, so much so that my gut activates a red alarm at the sight of a fellow human who doesn’t share that feeling.
Whence empathy? Although empathy has a long tradition of support in many philosophies, antiquity alone is not a valid argument. Warlike chimpanzees share as much DNA with us as hippie bonobos; the evidence on mirror neurons and their role in empathy is far from settled; and disguised sociopathy sounds like an optimal strategy.
Buddhism has a concept that I find highly appealing. It’s called metta and it basically states that sentient beings’ preference for not suffering is one you can readily agree with because you’re a sentient being too. There are several ways to express the same idea in contemporary terms: We’re all in this together, we’re not so different, and other feel-good platitudes.
We can go one step further and assert this: A world where only some personal sets of preferences get to be realized runs the risk of your preferences being ignored, because there’s no guarantee that you will be the one who decides which preferences are favored; whereas a world where all personal sets of preferences are equally respected is the one where yours have the best chance of being realized. To paraphrase the Toyota ads, what’s good for the entire world is good for you.
(I know most LWers will demand a selfish justification for altruism because any rational decision theory will require it, but I feel hypocritical having to provide a selfish argument for altruism. Ideally, caring for others shouldn’t need to be justified by resorting to an expected personal benefit, but I acknowledge that trying to advance this point is like trying to show a Christian ascetic that hoping to get to heaven by renouncing worldly pleasures is the epitome of calculated hedonism. I still haven’t resolved this contradiction, but fortunately this is the one place in all the Internet where I can feel safe expecting to be proved wrong.)
Another odd thing about Rand’s egoism is that it’s mostly directed towards being able to pursue one’s goal of making excellent things for other people, not being hassled in the process, and being appropriately rewarded.
But he views extinction-level events as “that much worse” than a single death. Is an extinction-level event really that bad, though? If everyone gets wiped out, there’s no suffering left.
I’m not against others being happy and successful, and sure, that’s better than them not being. But I seem to have no preference for anyone existing. Even myself, my kids, my family—if I could, I’d erase the entire lot of us, but it’s just not practical.
Your original post says,
Would you please describe the sequence of thoughts leading to that conclusion?
Sure. The goal is to make TotalSuffering as small as possible, where each individual’s Suffering is >= 0. There may be some level of individual Suffering that rounds down to zero, like the pain of hurting your leg while trying to run faster, or stuff like that. The goal is to make sure no one is in real suffering, not to eliminate all Fun.
One approach is to make sure no one is suffering. That entails a gigantic amount of work. And if I understand MWI correctly, it’s actually impossible, as branches will arise that amount to a sort of hell. (Only considering forward branches.) Sure, it “all averages out to normal”, but tell that to someone in a hell branch.
The other way is to eliminate all life (or the universe). TotalSuffering is then 0, the optimal value.
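As a toy illustration of the structure of this argument (the function, names, and numbers below are invented for the sketch, not anyone’s actual model): a sum of nonnegative suffering terms is trivially minimized by an empty population, which is exactly the degenerate solution the replies above push back against.

```python
# Toy sketch of the objective described above: TotalSuffering as a sum of
# nonnegative individual suffering levels. All values are arbitrary.

def total_suffering(population):
    """Sum of individual suffering levels, each clipped to be >= 0."""
    return sum(max(0.0, person["suffering"]) for person in population)

current_world = [
    {"name": "A", "suffering": 0.1},  # pain that "rounds down" to zero
    {"name": "B", "suffering": 7.0},  # real suffering
    {"name": "C", "suffering": 2.5},
]

# Approach 1: reduce everyone's suffering (a gigantic amount of work, and on
# the MWI worry above it never reaches exactly zero).
improved_world = [dict(p, suffering=p["suffering"] * 0.1) for p in current_world]

# Approach 2: eliminate everyone. A sum over an empty population is zero,
# which is the global minimum of any sum of nonnegative terms.
empty_world = []

print(total_suffering(current_world))   # ~9.6
print(total_suffering(improved_world))  # ~0.96
print(total_suffering(empty_world))     # 0.0
```

The earlier replies object at exactly this point: an objective that only counts suffering, and counts it as nonnegative, is minimized by nobody existing at all.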
So, you say you have a “preference not to suffer” for everyone, but “preference not to die” only for a few people, if I read it correctly.
When you are asking how someone can have a “preference not to die” for everyone, I think you should also ask how you have a “preference not to suffer” for everyone, because to me it seems rather similar. I mean, the part of “preference not to … for everyone” is the same, so we can ask whether this is realistic, or is just some kind of illusion, to create a better self-image. The difference between wanting someone not to suffer and not to die does not seem so big to me, knowing that many people prefer not to die, and that the idea that they will die causes them suffering.
Another thing is the technical limitation of the human brain. If the death or suffering of one person causes you some amount of sadness (whether we measure it by neurons firing or by hormones in the blood), of course the death or suffering of a million people cannot cause you a million times more neuron signals or hormones, because such a thing would kill you instantly. The human brain does not have the capacity to multiply this.
But for a transhumanist this is simply a bug in the human brain. What our brains do is not what we want them to do. It is not the case that “whatever my brain does is by definition what I think is correct”. We are here to learn about biases and try to fix them. The human brain’s inability to properly multiply emotions is simply yet another such bias. The fact that my brain is unable to care about some things (on the emotional level) does not mean that I don’t care. It merely means that currently I don’t have the capacity to feel it on the gut level.
Good points. But I’m thinking that the pain of death is purely because of the loss others feel. So if I could eliminate my entire family and everyone they know (which ends up pulling essentially every person alive into the graph), painlessly and quickly, I’d do it.
The bug of scope insensitivity doesn’t apply if everyone gets wiped out nicely, because then the total suffering is 0. So, for instance, grey goo taking over the world in an hour: that’d cause a spike of suffering, but then levels drop to 0, so I think it’s alright. Whereas an asteroid that kills 90% of people would leave a huge amount of suffering for the survivors.
In short, the pain of one child dying is the sum of the pain others feel, not something intrinsic to that child’s death. So if you shut up and multiply with everyone dying, you get 0. Right?
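Restating the claim being made here, not endorsing it, with grief_j(i) as a placeholder for the pain person j feels at person i’s death:

```latex
% The claim: the badness of a death is exhausted by the grief of those who survive it.
\text{Pain}(\text{death of } i) \;=\; \sum_{j \in \text{survivors}} \text{grief}_j(i)

% If everyone dies at once, the set of survivors is empty, and a sum over the
% empty set is zero:
\sum_{j \in \varnothing} \text{grief}_j(i) \;=\; 0
```

The arithmetic goes through; what the earlier replies dispute is the premise that a death carries no disvalue beyond the grief it causes.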