“Do you have a take on why / how that doesn’t get trained away?”

Yeah, good question. Mostly, I think it sticks around because it pays rent: it becomes something terminally valued, or becomes otherwise cognitively self-sustaining.
That being said, I do think parts of it in fact get trained away, or at least weakened. Developing a more sophisticated theory of mind involves learning strategies for contextually suppressing the effects of this neural reuse, so that you can distinguish “how they feel about it” from “how I feel about it” when needed. But I think the lazy, shared representations remain the default, because they require less mental bookkeeping (whose feelings were those, again?), they’re really useful for making predictions in the typical case (it usually doesn’t hurt), and they make transfer learning straightforward.
EDIT: These are beliefs held lightly. I think it is plausible that a more active intervention is required, like “some mechanism to detect when a thought is an empathetic simulation, and then it can just choose not to send an error signal in that circumstance”, or something similar, as you mentioned in the linked comment.
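To make that hypothesized mechanism a bit more concrete, here’s a toy sketch in Python. Everything in it is invented for illustration (the single “valence” variable, the owner tag, the update rule); it’s not a claim about how brains actually implement this. The idea is just: one shared affect representation gets reused for first-person feelings and for empathetic simulation, theory of mind adds a cheap “whose feelings are these?” tag, and the “more active intervention” is a gate that declines to send an error signal when the thought is tagged as a simulation of someone else.

```python
from dataclasses import dataclass

@dataclass
class AffectState:
    valence: float  # the shared ("lazy") representation: the same variable whether
                    # the feeling is mine or a simulated other's
    owner: str      # "me" or "them" -- the cheap bookkeeping tag that a more
                    # sophisticated theory of mind adds

def affect_update(state: AffectState, prediction_error: float,
                  gate_empathetic_errors: bool, lr: float = 0.1) -> AffectState:
    """One learning update on the shared affect variable.

    If gate_empathetic_errors is True (the "more active intervention"), updates
    triggered by simulations of someone else's feelings are skipped, so they
    don't get trained away along with everything else.
    """
    if gate_empathetic_errors and state.owner != "me":
        return state  # detected an empathetic simulation -> send no error signal
    return AffectState(valence=state.valence + lr * prediction_error, owner=state.owner)

# The same update rule runs on first-person and simulated-other states; only the
# gate tells them apart.
mine = affect_update(AffectState(0.0, "me"), prediction_error=1.0, gate_empathetic_errors=True)
theirs = affect_update(AffectState(0.0, "them"), prediction_error=1.0, gate_empathetic_errors=True)
print(mine.valence, theirs.valence)  # 0.1 vs. 0.0 -- the empathetic copy is left alone
```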
Sure, but there’s a question about how much control “you” have. For example, say I’m a guy who really likes scary movies, and I really liked Alien the first 7 times I watched it, but it’s been getting less scary each time I watch it. I really want it to feel super-scary again the 8th time I watch it. But that’s not under my control.
I think there’s kind of an “I should be scared right now” “head” on the world-model, and it has gradually learned over the previous 7 viewings that none of the scenes depicted in Alien are actually threatening to me. I have some conscious control over e.g. what to attend to / think about while watching, whether to take drugs first, etc., and those will have some effect on how scared I feel, but this is a case where I’m mostly powerless, I think.
I think that in the absence of a “more active intervention” like the one in your last paragraph, something similar would happen with “feeling good about a situation which will lead to another person feeling reward”. Even if I terminally value that, it’s not clear that there would be anything I could do about it.
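Here’s the same point as a toy sketch (every number and name invented purely for illustration): a “should I be scared?” prediction head gets updated by ordinary prediction error, reality keeps delivering zero actual threat, and the prediction decays toward zero no matter how much “I” want the movie to stay scary. Swap in a “feel good about another person’s reward” head and, absent something like the gating in the earlier sketch, the same decay would presumably apply.

```python
# A toy "should I be scared?" prediction head, trained by ordinary prediction error.
# The learning rate and scariness values are made up; only the shape of the dynamic matters.

def train_head(prediction: float, actual: float, lr: float = 0.4) -> float:
    """One prediction-error update: nudge the prediction toward what actually happened."""
    return prediction + lr * (actual - prediction)

scared = 1.0  # first viewing: very scary
for viewing in range(1, 9):
    print(f"viewing {viewing}: predicted scariness {scared:.2f}")
    scared = train_head(scared, actual=0.0)  # nothing in the movie ever actually hurts me

# By viewing 8 the prediction is near zero. "I want it to stay scary" never appears
# in the update rule, which is the sense in which I'm mostly powerless here.
```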
“But I think the lazy, shared representations remain the default”
I dunno, it’s not obvious to me that prosocial concern for others is “default” while status competition and outgroup-hatred and jealousy and flirting and all those other things are “non-default”.
Hmm, well, really, maybe it’s not worth arguing about. We need to explain these antisocial behaviors one way or the other. My hunch is that when we can explain status competition, we’ll naturally see that those same mechanisms, whatever they are (“more active interventions”, as in your last paragraph), are probably also involved in prosocial empathetic concern.
Well anyway, let’s explain status drive and then we can cross that bridge when we get to it. :)