I’m not sure that I can’t generalize the experience of empathy to apply to people whose faces I can’t see. They don’t have to be real people; they can be stand-ins. I can picture someone terrified, in desperate need, and empathize. I know that there are and will be billions of people who experience the same thing. Now, I can’t succeed in empathizing with these people per se: I don’t know who they are, and even if I did there would be too many. But I can form some idea of what it would be like to stare 1,000,000,000 scared children in the eyes and tell them that they have to die because I love my family and friends more than them. Imagine doing that to one child, and then doing it 999,999,999 more times. That’s how I try to emotionally represent the survival of the human race.
The fact that you will never have to experience this doesn’t mean those children won’t experience the fear. Now, you can’t actually make decisions this way (weighing the experiences of inflicting both sets of pain yourself), because for big decisions thinking like this will paralyze you with despair and grief. You will get sick to your stomach. But the emotional facts should still be in the back of your mind motivating your decisions, and you should come up with ways to represent mass suffering so that you can calculate with it without always having to empathize with it. You need this kind of empathy when constructing your utility function; it just can’t actually be in your utility function.
Getting back to the original issue: since protecting humanity isn’t necessarily driven by the amygdala and suchlike instincts, and requires all the logic & rationalization above to defend, why do you value it?
From your explanation I gather that you first decided it’s a good value to have, and then constructed an emotional justification to make it easier for you to have that value. But where does it come from? (Remember that as far as your subconscious is concerned, it’s just a nice value to signal, since I presume you’ve never had to act on it—far mode thinking, if I remember the term correctly).
Extending empathy to those whom I can’t actually see just seems like the obvious thing to do, since the fact that I can’t see their faces doesn’t appear to me to be a morally relevant feature of my situation, and I know that if I could see them I would empathize.
So I’m not constructing an emotional justification post hoc so much as thinking about why anyone matters to me and then applying those reasons consistently.
Good to know you’re not a psychopath, anyway. :-)