Anthropomorphizing Humans
Sometimes, we assign desires and emotions to cars. We say that the car wants to drive, or we say that a strange vibration is the car expressing its displeasure at a too-long gap between oil changes. That’s anthropomorphization: we imagine that non-human objects have desires and emotions driving their behavior. We model the car as having desires (e.g. regular oil changes), emotions in response to those desires being met or not met, and observable behavior corresponding to those emotions. Of course, the actual cause of the car’s behavior is much more mechanical—low oil or coolant, built-up sludge, etc.
Now, consider hangriness.
I don’t notice that I’m hungry. I notice that the trash is overflowing and hasn’t been taken out. I feel angry about the trash, so I model myself as angry because of the trash. If someone says “why are you angry?”, I talk about how I want a clean house, and how annoying it is that the trash has not been taken out. But the actual cause is simply low blood sugar, or something like that.
This is anthropomorphization of myself: I imagine that my behavior is driven by some desire (i.e. I want a clean house) and the frustration of not having that desire met. Yet the actual cause is much more mechanical, and unrelated to the supposed desire.
Likewise, we often anthropomorphize other humans. If someone else is hangry, I might notice their anger and ask them what they’re angry about, without realizing that they just haven’t eaten in a while. In general, if I ask someone why they think X or why they decided Y, they’ll come up with a whole explanation for why X or Y makes sense, which may or may not have anything at all to do with the actual causes of their belief/decision—i.e. they rationalize post hoc. Mistaking that post-hoc justification for the actual cause of the belief/action would be anthropomorphization.
Empathic Reasoning
Empathic reasoning is especially prone to the anthropomorphization failure mode in general, and to anthropomorphization of humans in particular.
Empathic reasoning is all about putting yourself in someone else’s shoes, asking “What do I want? What do I feel?”, and explaining behavior in terms of those wants and feelings. Essentially, empathic reasoning assumes the anthropomorphic hypothesis—it assumes that behavior is a result of desires and emotions—and tries to back out those desires and emotions by simulating oneself in the same situation.
In cases like hangriness, where the real cause diverges heavily from the first-person experience, that’s going to be highly misleading. Empathy may yield a good idea of what the situation feels like/looks like to another person, but the other person’s experience includes a wildly inaccurate model of the underlying causes. If we’re going to leverage empathic reasoning successfully, we need to be very careful about separating what the person perceives from reality, and in particular separating what the person perceives as causing their beliefs/behavior from what actually causes their beliefs/behavior.