Still working my way through reading this series—it is the best thing I have read in quite a while and I’m very grateful you wrote it!
I feel like I agree with your take on “little glimpses of empathy” 100%.
I think fear of strangers could maybe be implemented without a steering-subsystem circuit? (I should say up front that I don’t know more about developmental psychology/neuroscience than you do, but here’s my 2c anyway.) Put aside whether there’s another, more basic steering-subsystem circuit for agency detection; we know that pretty early on, through some combination of instinct and learning from scratch, young humans and many animals learn that there are agents in the world who move in ways that don’t conform to the simple rules of physics they are learning. These agents seem to have internally driven and unpredictable behavior, in the sense that their movement can’t be predicted by simple rules like “objects tend to move toward the ground unless something stops them” or “objects tend to maintain their momentum”. It seems like a young human could learn an awful lot of that from scratch, and even develop (in their thought generator) a concept of an agent.
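To make that concrete, here’s a toy sketch (entirely my own construction, with made-up numbers, nothing from the series) of how an “agent” concept could fall out of prediction error alone: anything whose motion keeps violating the learned folk-physics predictor gets flagged.

```python
import random

GRAVITY = -1.0  # folk-physics rule: unsupported things accelerate downward

def physics_predict(pos, vel):
    """Folk-physics prediction for the next position: keep momentum, add gravity."""
    return pos + vel + GRAVITY

def prediction_error(trajectory):
    """Mean error of the folk-physics predictor over an observed 1-D trajectory."""
    errors = []
    for t in range(1, len(trajectory) - 1):
        vel = trajectory[t] - trajectory[t - 1]
        errors.append(abs(physics_predict(trajectory[t], vel) - trajectory[t + 1]))
    return sum(errors) / len(errors)

# A dropped rock obeys the folk-physics rules exactly...
rock = [100 + 0.5 * GRAVITY * t * t for t in range(10)]

# ...while an agent is self-propelled and erratic.
random.seed(0)
agent = [0.0]
for _ in range(9):
    agent.append(agent[-1] + random.uniform(-3, 3))

for name, traj in [("rock", rock), ("agent", agent)]:
    err = prediction_error(traj)
    print(f"{name}: mean prediction error {err:.2f} -> tagged as agent? {err > 0.5}")
```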
Because of their unpredictability, agent concepts in the thought generator would be linked to thought-assessor systems related to both reward and fear: not necessarily from prior learning derived from specific rewarding or fearful experiences, but simply because their behavior can’t be predicted with intuitive physics, so there remains a very wide prior on what will happen when an agent is present.
In that sense, when a neocortex is first formed, most things in the world are unpredictable to it, and an optimally tuned thought generator + thought assessor would keep circuits active for both reward and harm. Over time, as the thought generator learns folk physics, most physical objects become predictable, and it typically generates thoughts in line with their actual behavior. But agents are a real wildcard: their behavior can’t be predicted by folk physics, and so they are perceived the way every other object in the world used to be: unpredictable, and thus continually predicting both reward and harm in an opponent process that leads to an ambivalent and uneasy neutral. This story predicts that individual differences in reward and threat sensitivity would particularly govern the default reward/threat balance for otherwise unknown items. It might (I’m really REALLY reaching here) help to explain why attachment styles seem so fundamentally tied to basic reward and threat sensitivity.
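Here’s the kind of thing I have in mind, as a deliberately crude sketch (the linear form and all numbers are invented): both channels fire in proportion to residual unpredictability, so a well-modeled rock eventually elicits calm indifference while a stranger keeps eliciting high arousal with near-zero net valence.

```python
def channel_activations(uncertainty, reward_sens=1.0, threat_sens=1.0):
    """Opponent-process readout: under a wide outcome prior, keep both
    channels active; learned predictability shrinks both toward zero."""
    reward = reward_sens * uncertainty
    threat = threat_sens * uncertainty
    net = reward - threat       # near zero: ambivalence, not calm indifference
    arousal = reward + threat   # high total drive = "uneasy"
    return net, arousal

# Early on everything is unpredictable; later, rocks aren't, but agents still are.
for obj, uncertainty in [("rock, day 1", 0.9), ("rock, later", 0.05),
                         ("stranger, anytime", 0.9)]:
    net, arousal = channel_activations(uncertainty)
    print(f"{obj}: net valence {net:+.2f}, arousal {arousal:.2f}")

# An individual tuned toward threat defaults to wariness around anything unknown:
print(channel_activations(0.9, reward_sens=0.8, threat_sens=1.2))
```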
As the thought generator forms more concepts about agents, it might even learn that agents can be classified with remarkable predictive power into “friend” or “foe” categories, or perhaps “mommy/carer” and “predator” categories. Because rocks behave with complete indifference toward small children, “friend” or “foe” categories don’t help much in predicting the behavior of, say, a falling rock. By contrast, the agents around a child are often far from indifferent to children, making it simple for the child to predict whether favorable things will happen around any particular agent by classifying agents into “carer” or “predator” categories. These categories can be entirely learned: clusters of neurons in the thought generator that connect to reward and threat circuits in the steering subsystem and/or thought assessor. So the primary task in learning to predict agents is simply tracking whether good things or bad things happen around each agent, as judged by the steering subsystem.
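As a sketch of how little machinery that would need (the class and signal names are mine, and the ±1 “valence” stands in for whatever reward/threat judgment the steering subsystem actually emits):

```python
from collections import defaultdict

class AgentClassifier:
    """Track outcomes experienced around each agent, as judged by the
    steering subsystem (+1 for rewarding, -1 for threatening)."""
    def __init__(self):
        self.outcomes = defaultdict(list)

    def observe(self, agent_id, valence):
        self.outcomes[agent_id].append(valence)

    def classify(self, agent_id):
        history = self.outcomes[agent_id]
        if not history:
            return "unknown (wide prior: both reward and threat stay active)"
        return "carer" if sum(history) > 0 else "predator"

model = AgentClassifier()
for valence in [+1, +1, +1, -1, +1]:   # mostly good things happen around mommy
    model.observe("mommy", valence)
model.observe("growling dog", -1)

print(model.classify("mommy"))          # -> carer
print(model.classify("growling dog"))   # -> predator
print(model.classify("stranger"))       # -> unknown (wide prior ...)
```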
This story would also predict that, before the predictive power of categorizing agents into “friend” vs. “foe” categories has been learned, children wouldn’t know to place agents into these categories. They would take longer to learn whether an agent is trustworthy or not, particularly if they haven’t yet learned what an agent is. As they grow older, they would get more comfortable classifying agents into “friend” or “foe” categories and would need fewer exemplars to learn to trust (or distrust!) a particular agent.
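The “fewer exemplars” part is basically Bayesian learning-to-learn, and a worked toy example shows the effect (the Beta priors are invented numbers): once the learner’s prior says agents cluster into friendly and hostile types, one or two interactions classify a new agent with high confidence, whereas a learner with a flat Beta(1, 1) prior over each new agent has no category structure to exploit and narrows down slowly.

```python
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_marginal(good, bad, a0, b0):
    """Log evidence for (good, bad) outcomes under a Beta(a0, b0) prior."""
    return log_beta(a0 + good, b0 + bad) - log_beta(a0, b0)

def p_friend(good, bad):
    """Posterior that a new agent is a friend, under a learned bimodal prior:
    friends ~ Beta(8, 2), foes ~ Beta(2, 8), 50/50 base rate."""
    log_f = log_marginal(good, bad, 8, 2)
    log_o = log_marginal(good, bad, 2, 8)
    return exp(log_f) / (exp(log_f) + exp(log_o))

for good in [1, 2, 3]:
    print(f"{good} good interaction(s): P(friend) = {p_friend(good, 0):.2f}")
# -> 0.80, 0.92, 0.97: a handful of exemplars settles it once the
#    friend/foe category structure has already been learned.
```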