But would you agree that a hypothetical (but impossible) perfectly empathetic person would always be able to make a moral choice as good as any other system?
No, I would not. I am suspicious of this entire line of argument, hence my comment (any time someone says ‘X is flawed, so we need more of X’, I begin to wonder). I suspect that if you broaden empathy to try to patch up the observed problems, all you are doing is re-inventing, in a very obtuse way, utilitarianism or some other standard system of ethics.
(On what empathetic grounds are you criticizing the examples of empathy-gone-wrong? None at all, I suspect, but comparing them to what utilitarianism says.)
On what empathetic grounds are you criticizing the examples of empathy-gone-wrong? None at all, I suspect, but comparing them to what utilitarianism says.
That’s ridiculous. I feel very sorry for all the people in line that had to watch Sherri glibly skip up to the top, and I’m sorry they didn’t get a chance to tell their stories.
But kidding aside, there may be some truth to your statement because I might be led to a definition of empathy as appreciating the value of someone’s utilons. For example, Sherri gets some utilons for skipping the line and everyone else loses some (except for those they gain by empathizing with Sherri).
So the end result is the same—you want to maximize utilons. However—and I think this is an important point—you use empathy to figure out what it is that each person in the equation values, as it is different for each person. How else are you to know what utilities to assign to each person for each of the outcomes?
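Read as utilon bookkeeping, the line-skipping example can be sketched as below. All names and numbers are invented for illustration; "empathy" here just stands in for whatever supplies the per-person estimates for each outcome:

```python
# Hypothetical utilon accounting for the line-skipping example.
# The per-person utility changes are made-up illustrative numbers;
# in the thread's terms, empathy is what lets you fill them in.

def total_utility(per_person_utilities):
    """Sum each person's utility change for one outcome."""
    return sum(per_person_utilities.values())

# Outcome A: Sherri skips the line.
skip = {"Sherri": +5, "others_lost_time": -8, "others_empathizing": +1}
# Outcome B: Sherri waits her turn.
wait = {"Sherri": -1, "others_lost_time": 0, "others_empathizing": 0}

# A utilitarian picks whichever outcome maximizes the total.
best = max([skip, wait], key=total_utility)
```

On these invented numbers the bystanders' losses outweigh Sherri's gain, so waiting wins; different empathy-supplied estimates would flip the answer, which is the point of the disagreement above.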
That’s ridiculous. I feel very sorry for all the people in line that had to watch Sherri glibly skip up to the top, and I’m sorry they didn’t get a chance to tell their stories.
Sure, you say that now...
I might be led to a definition of empathy as appreciating the value of someone’s utilons. For example, Sherri gets some utilons for skipping the line and everyone else loses some
What does the word ‘empathy’ buy you there? If you don’t already appreciate the value of someone else’s utilons, in what sense are you a regular utilitarian? Aren’t you just anything-but-utilitarian? (Ethical egoism ‘only my utilons count’, deontologies ‘only law-tilons count’, etc.)
How else are you to know what utilities to assign to each person for each of the outcomes?
The way utilitarians have always done—revealed preference, willingness to pay, neural or behavioral correlates, self-reports, longevity, etc. Empathy seems no better than any of those to me. (I can’t imagine I would be able to accurately assess utility for a masochist just by trying to employ my empathy!)
(I can’t imagine I would be able to accurately assess utility for a masochist just by trying to employ my empathy!)
I think that might be a better strike against empathetic ethics than anything the OP presents, actually. Empathy’s effectiveness as a moral guide is strictly limited by its ability to model others’ hedonic responses, an ability constrained both by the breadth of hedonic variation in the environment and by the modeler’s imagination. That doesn’t even work too well for heterogeneous, multicultural societies like most of the First World—you need to take a meta-ethical approach to keep it from breaking down over sexual and religious points, for example—so I’d expect it to be completely inadequate for problems involving nonhuman or transhuman agents. Which needn’t be speculative; animal rights would qualify, as would corporate ethics.
Beware of other-optimizing, essentially.
Those are good points.
Yes, but just to iterate: it’s a failure to empathize, not a failure of empathy.
The semantics are important in understanding the debate. Perhaps that is obvious to the rationalists here, but it seems to me this was essentially a semantic debate (the last few comments between gwern and byrnema), and I tend to agree with byrnema. Perhaps it would be helpful to clearly define the expected denotation of “empathy”? Wikipedia states “Empathy is the capacity to recognize and, to some extent, share feelings (such as sadness or happiness) that are being experienced by another sapient or semi-sapient being”, whereas Dictionary.com defines it quite a bit more broadly as “1. the intellectual identification with or vicarious experiencing of the feelings, thoughts, or attitudes of another. 2. the imaginative ascribing to an object, as a natural object or work of art, feelings or attitudes present in oneself: By means of empathy, a great painting becomes a mirror of the self.”
I guess my point is that I have always thought of the term in the broader sense, and I don’t think anyone can have any understanding whatsoever without the broader form of “empathy”. Perhaps my own connotations are filtering in there too.
Yes, exactly.
I agree with Nornagest, empathy’s effectiveness as a moral guide is strictly limited by ability to model others’ [values]. (I struck out “it’s” ability—empathy doesn’t have inherent limits, though we do.)
What does the word ‘empathy’ buy you there?
For me, ‘empathy’ just means determining in any way whatsoever what people value, and how much. So it would include everything you mentioned. Empathy means just accurately estimating what someone else values. Since preferences are indeed hidden (or exaggerated!), it’s a high-level prediction/interpolation game. If you share a lot in common with someone, it helps a lot if you can ‘feel’ what they are feeling by imagining being them, and this is often what is meant by empathy. The feelings/emotions of empathy are also very important because they lend a common currency for value. For example, if something makes someone sad, I can relate to their being sad without relating to that particular reason. Also, I can measure how sad it makes them as a quantitative input in my equation. The problem with trying to empathize with a computer (or with an animal; see Nornagest’s comment) is that it is difficult to know how to weight their expressed preferences.
If my concept of empathy is accepted, then empathy buys me the ability to model other people’s preferences. You can’t apply a utilitarianism that includes a term for other people’s preferences without it.
Well, OK.
Note that your concept of empathy includes cultivating a social circle in which people honestly and accurately report their own preferences when asked and then explicitly asking someone in that circle for their preferences. It includes reading the results of studies on the revealed preferences of different communities and assuming that someone shares the most common preferences of the community they demographically match.
More generally, it includes a number of things that completely ignore any impressions you might garner about an individual’s preferences by observation of that individual.
I agree that your concept of empathy is a useful thing to have.
I also think it fails to map particularly closely to the concept “empathy” typically evokes in conversation with native English speakers.
Note that your concept of empathy includes cultivating a social circle in which people honestly and accurately report their own preferences when asked and then explicitly asking someone in that circle for their preferences. It includes reading the results of studies on the revealed preferences of different communities and assuming that someone shares the most common preferences of the community they demographically match.
Huh. I don’t entirely see where you are getting this. I’ll reread my comment, but what I meant is that empathy is accurately modeling people’s preferences using any means whatsoever.
Sometimes people use the term empathy to mean (for example, with respect to a ‘bleeding heart’) that the empathetic person tries very sincerely to model other people’s preferences and weights those preferences strongly. Also, empathy can mean that a person relies solely on predicting emotions when modeling preferences. I’m not sure how prevalent these different definitions are, but regarding “your concept of empathy is a useful thing to have”: thanks.
A good distinguishing question for the common concept of empathy might be to ask-the-audience if a sociopath could have empathy. That is, consider a sociopath that is really good at modeling other people’s preferences but simply doesn’t weight other people’s preferences in their utility function. Could this person be said to ‘have empathy’?
If the answer is decidedly ‘no’, then it seems the common concept of empathy might really be about a feeling a person has about the importance of other people’s preferences, depending on whether ‘accuracy’ is or isn’t also required.
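The sociopath question separates two things the word ‘empathy’ bundles together: how *accurately* an agent models others’ preferences, and how much *weight* those preferences get in the agent’s own utility function. A minimal sketch of that separation (function name and all numbers are hypothetical illustrations, not anyone’s actual proposal):

```python
# Two independent dials the thread's "sociopath" case pulls apart:
# (1) accuracy of the model of others' payoffs (held fixed here), and
# (2) the weight those payoffs get in the agent's own utility.
# All values are invented for illustration.

def agent_utility(own_payoff, modeled_other_payoffs, weight_on_others):
    """Agent's utility: own payoff plus a weighted sum of others' modeled payoffs."""
    return own_payoff + weight_on_others * sum(modeled_other_payoffs)

others = [-3.0, -3.0]  # perfectly accurate model of two bystanders' losses

# Same accurate model, different weights:
empath = agent_utility(5.0, others, weight_on_others=1.0)
sociopath = agent_utility(5.0, others, weight_on_others=0.0)
```

With identical (accurate) models, the high-weight agent judges the action a net loss while the zero-weight agent happily takes it, which is why the audience answer to ‘does the sociopath have empathy?’ reveals which dial the common concept tracks.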
Agreed that that’s a good distinguishing question. I predict that audiences of native English speakers who have not been artificially primed otherwise will say the sociopath lacks empathy.
As for not seeing where I’m getting what you quote… I’m confused. Those are two plausible techniques for arriving at accurate models of other people’s preferences; would they not count as ‘empathy’ in your lexicon?
As for not seeing where I’m getting what you quote… I’m confused.
I’m confused too. I read your comments over again today and they made sense. I kept making the same mistake (at least three times) of reading you as defining rather than giving examples.
Ah! Yes, OK, the conversation makes sense now. Thanks for saying that out loud.
… I applied some google-fu: not having empathy is one of the defining characteristics of sociopaths, and the first definition given seems pretty straightforward:
The ability to understand and share the feelings of another.
I’m happy with that definition, and it doesn’t change much. On the one hand, to understand the feelings of another you’ve got to have a good model for their preferences. Then sharing their feelings is the human/connection aspect of it.
In relation to this thread, I would refine that (perfect) empathy would be more than enough to be moral since understanding another person’s preferences is the first part of it. (I don’t think the second part is necessary for morality, but it makes it more natural.)