Empathy has serious shortcomings… compared to what?
For example, I consider it a feature that we are more likely to empathize with children and puppies than with cockroaches, just because they are cute and more similar to us. Or what about rocks? (Who said that rocks don’t have feelings? They are just… different. We bastards constantly fail to understand them.)
As there is no Universal Ethics of the Universe, the only thing we can compare our own implementation of morality against is our (probably incomplete) models of it; then we do the evaluation… again using our built-in system on those corner cases, declaring that we’ve found a bug… (and in this case, it seems wise to think twice before fixing such bugs in order to build an Even More Friendly AI…).
Nevertheless, I agree that simulating the feelings of others is not all there is to morality. Consider Schelling points, which (in my view) nicely explain “fairness” without requiring even a little bit of feeling for your enemies. Or… of course, it is a little mixed up (empathy as an implementation of Schelling points?)…
Well, not to be snippy, but the answer to that question is: neurologists.
Since when have neurologists studied rocks? The whimsical suggestion that rocks might have feelings is somewhat akin to the less whimsical suggestion that there are lots of things that may have ‘feelings’ we do not easily or usually detect, or cannot detect without special equipment.
And some of these feelings (like bio-communication in plants), while measurable, are ones we usually don’t care that much about. And empathy for the pain of plants (and animals) may interfere with empathy for the pain of people, if you take compassion fatigue into consideration.
I’d argue that bio-communication in plants is probably not any actual kind of feeling or qualia. This is what I meant when I referred to neurologists; currently, science has a fair (though vague) idea as to what sort of structure enables human feelings and experiences. When we don’t detect this structure or anything analogous to it, we can be pretty confident that there’s no consciousness/qualia/nameless-redness/etc. going on. Not extremely confident, as we don’t yet understand consciousness well enough to rule out possible alternate implementations which don’t resemble the known ones, but still fairly confident.
I think it is an important point to distinguish between feelings (aka response) and consciousness, though I am not sure how to distinguish these two things. And the elevation of ‘consciousness’ does not dismiss ‘feelings’.
I’m not sure what you’re getting at here by binding “feelings” and “response”; I think our terminology is getting confused.
I’ll clarify my earlier comment by saying that when it comes to figuring out whether something is an entity I should behave morally towards, I’m only really interested in conscious feelings. Response to stimuli alone, without conscious experience, shouldn’t have any moral weight. And conversely, if something can consciously experience pain but is unable to respond to it, it is still immoral to hurt it.
Ah. I see consciousness as the ability to interrupt an ‘instinctive’ response with a measured or planned response. And feelings as the middle stage between action and reaction, conscious or no.
I do not privilege conscious experience just because I absolutely enjoy it. It sounds like you do.