So, you say you have a “preference not to suffer” for everyone, but a “preference not to die” only for a few people, if I read you correctly.
When you ask how someone can have a “preference not to die” for everyone, I think you should also ask how you can have a “preference not to suffer” for everyone, because the two seem rather similar to me. The “preference not to … for everyone” part is the same in both, so we can ask whether either is realistic, or just some kind of illusion that creates a better self-image. The difference between wanting someone not to suffer and wanting them not to die does not seem so big to me, given that many people prefer not to die, and that the very idea of dying causes them suffering.
Another thing is the technical limitation of the human brain. If the death or suffering of one person causes you some amount of sadness (whether we measure it in neurons firing or in hormones in the blood), then obviously the death or suffering of a million people cannot cause you a million times more neural signals or hormones, because that would kill you instantly. The human brain simply does not have the capacity to multiply like this.
But for a transhumanist this is simply a bug in the human brain. What our brains do is not always what we want them to do. It is not the case that “whatever my brain does is, by definition, what I think is correct”. We are here to learn about biases and try to fix them. The human brain’s inability to properly multiply emotions is simply yet another such bias. The fact that my brain is unable to care about some things (on the emotional level) does not mean that I don’t care. It merely means that I currently lack the capacity to feel it on a gut level.
Good points. But I think the pain of death comes purely from the loss that others feel. So if I could eliminate my entire family and everyone they know (which ends up pulling essentially every person alive into the graph), painlessly and quickly, I’d do it.
The bug of scope insensitivity doesn’t apply if everyone gets wiped out nicely, because then the total suffering is 0. So, for instance, grey goo taking over the world in an hour would cause a spike of suffering, but then the level drops to 0, so I think that’s alright. Whereas an asteroid that kills 90% of people would leave a huge amount of suffering for the survivors.
In short, the pain of one child dying is the sum of the pain others feel, not anything intrinsic to that child’s death. So if you shut up and multiply with everyone dying, you get 0. Right?
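(To make the arithmetic behind this explicit, here is a minimal sketch; the symbols $S$, $g_{ij}$, and the survivor set are my own notation, not anything stated in the thread. If the total pain of a death is only the grief of those left behind, then

$$S = \sum_{i \in \text{survivors}} \; \sum_{j \in \text{deceased}} g_{ij},$$

where $g_{ij}$ is the grief that survivor $i$ feels over the death of person $j$. When everyone dies at once, the survivor set is empty, so the outer sum is vacuously $0$; that is exactly the “multiply with everyone dying and you get 0” claim.)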