In envy, if a little glimpse of empathy indicates that someone is happy, it makes me unhappy.
In schadenfreude, if a little glimpse of empathy indicates that someone is unhappy, it makes me happy.
When I’m angry, if a little glimpse of empathy indicates that the person I’m talking to is happy and calm, it sometimes makes me even more angry!
How sure are you that these are instances of empathy (defining it as “prediction by our own latent world model of ourselves being happy/unhappy soon”)? If I imagine myself in these examples, it doesn’t introspectively feel like I am reacting to an impression of their internal state, but rather like I am directly reacting to their social behavior (e.g., abstractly speaking, a learned reflex of status-reasserting anger when someone else displays high status through happy and calm behavior).
This would also cleanly solve the mysteries of why they don’t get updated and how they are distinguished from “other transient feelings”: there’s no wrong prediction by the latent world model involved (nothing to be distinguished or updated), and the social maneuvering doesn’t get negative feedback.
That’s where some instinctive disagreement of mine with that post of yours comes from too. But I also haven’t read through it carefully enough to be sure.
I think I probably don’t follow what you’re saying. It seems to me that people care very much about the internal state of other people. (Not in the sense of “people care that they have veridical beliefs about the internal state of other people”, but in the sense of “people spend a lot of time thinking about the internal state of other people, and their beliefs about those states are very relevant to their reactions”.)
Like, if I am to feel schadenfreude at Alice’s misfortune, it seems to me that it really matters that it’s a misfortune from Alice’s perspective. If I hate swimming and Alice loves it, and then Alice swims, then I wouldn’t feel schadenfreude there, right? And that requires attending to and reacting to (my beliefs about) Alice’s internal state, right?
Again, this seems very obvious to me, which suggests that I’m probably misunderstanding you.
I’m not claiming that people don’t care about other people’s internal states, I’m saying that it introspectively doesn’t feel like that is implemented via empathy (the same part of my world model that predicts my own emotions), but via a different part of my model (dedicated to modeling other people), and that this would solve the “distinguishing-empathy-from-transient-feelings” mystery you talk about.
Additionally (but relatedly), I’m also skeptical that those beliefs are better described as being about other people’s internal states rather than as about their social behavior. It seems easy to conflate these if we’re not introspectively precise. E.g., if I imagine myself in your Alice example, I imagine Alice acting happy, smiling and uncaring, and only then is there any reaction—I don’t even feel like I’m *able* to viscerally imagine the abstract concept (prod a part of my world model that represents it) of “Alice is happy”.
But these are still two distinct claims, and the latter assumes the former.
One illustrative example that comes to mind is the huge number of people who experience irrational social anxiety, even though they themselves would never judge someone in their position if the roles were reversed.
I’m also skeptical that those beliefs are better described as being about other people’s internal states rather than as about their social behavior.
Hmm. Continuing with the schadenfreude example, let’s say Alice stole my kettle and I would feel good if she burned her fingers on it. (Serves her right!) My introspection says, if Alice is alone when she burns her fingers, I’m still happy—that still counts. If I never see her again after that, that still counts. Heck, if she becomes a hermit and never sees another human again, that still counts. And therefore, that thought of Alice burning her fingers is pleasing in a way that is tightly connected to how I believe Alice feels, and disconnected from how I believe Alice is behaving socially, I think.
You mention “I imagine Alice acting happy, smiling and uncaring”. But the following two things feel very different to me:
“I imagine that Alice is acting happy, smiling and uncaring, and this is straightforwardly related to how she really feels”, versus
“I imagine that Alice is acting happy, smiling and uncaring, but on the inside she’s miserable, and she’s hiding how she really feels”.
What do you think?
I’m saying that it introspectively doesn’t feel like that is implemented via empathy (the same part of my world model that predicts my own emotions), but via a different part of my model (dedicated to modeling other people)
I don’t update much on that because I think almost all of the discourse and intuitions and literature surrounding the word “empathy” are not talking about the same thing that I want to talk about. Thus I tend to avoid the word “empathy” altogether where possible. I’ve been using other terms like “empathetic simulation” or “little glimpse of empathy”. I talk about that a bit in Section 13.5.2 here. More specifically, I’m guessing that it doesn’t “feel like empathy” when you imagine Alice burning her fingers on the kettle she stole from me, because that thought feels good, whereas empathizing with Alice would be unpleasant. Here, my model says “yes the thought feels good, and if that’s not what you think of as ‘empathy’, then the thing you think of as ‘empathy’ is not what I’m talking about”.
When we think of emotion concepts / categories, the valence / arousal / etc. associated with them are central properties. E.g. righteous indignation has to have positive valence and high arousal, otherwise we would call it something else (and think of it as something else). So if you think a thought that involves lots of the same cortical neurons as you get in typical righteous indignation, but those neurons trigger negative valence and low arousal in the brainstem (because of the empathy-detector intervening, or whatever), it wouldn’t feel anything like righteous indignation introspectively. Or something like that.