It fits perfectly, thanks! Yes, there's a bunch of other mechanisms/phenomena, such as:

- the developmental windows for learning speech and language,
- the spectrum of reactions to distress (anger, fear, etc.),
- the palmar grasp reflex.

Basically I'm interested in all biological mechanisms that shape our learning, not just affects, even ones that seem irrelevant for AI purposes. As can be seen from Kaj's post there, the way to get these systems to work might be nonintuitive, so every little hint will help in the end.
I think another post might be in order at some point to fully explore the list of these biological mechanisms, maybe as a pit stop before going for the full deal.
I have found a source of some more plausible mechanisms tied to common emotions: Dares, costly signals, and psychopaths (which references The Psychopath Code; the raw text is on GitHub). These sources focus on psychopaths, but they give well-suited descriptions of the following classes of emotions:
The predator emotions help us hunt and capture prey.
The defense emotions prepare us to detect and deal with predators and competitors.
The sexual emotions drive us to find sexual partners.
The family emotions let us talk to our parents and care for our offspring.
The group emotions let us form small social groups.
The social emotions let us form looser and larger social groups.
Some examples:
Hunger [...] Your digestion slows. Your vision and hearing gets sharper and you focus on distinguishing prey from threats. You feel the need to move, yet you are careful to stay invisible. You walk without haste, and keep your posture relaxed. Your breathing is regular, slow.
Euphoria [...] Your hearing switches off and your vision tunnels in on your target. Your breathing and heartbeat accelerate. Blood flows to your muscles, and glucose feeds into your blood. Your eyes widen, your mouth opens, and you bare your teeth.
Surprise [...] “startle response.” You flinch away from the threat, and raise your arms in self-defense. You lift your eyebrows and open your eyes wide to see better. Your hearing gets sharp. You exhale hard to clear your lungs of carbon dioxide. Your heart accelerates and you breathe in deep to oxygenate your body for action.
Love [...] We establish “closeness” by mutual physical contact. The kinds of contact depend on the relationship. The closer you are to another person the more you feel the emotion. Your eyebrows rise, your pupils widen, you smile and laugh and feel happy. You use open and dominant body language. You are more childlike: playful and uninhibited. You seek more contact. You need less sleep.
All of the descriptions are like this, and I think they are an excellent source when looking for mechanisms that facilitate the recognition of the more abstract patterns.
Other things that are candidates:

Fear of heights could work like the spider response: the visual system detects “height” from depth information plus a downward gaze, and responds as in the fear-of-spiders case with increased heart rate and attention.
What we find beautiful could come from a heart-rate increase and/or other positively valued responses to low-complexity visual cues, like the smoothness of visual features or easy-to-predict patterns (“clear forms”) - at least easy to predict for shallow neural networks.
The same could hold for sound patterns - or maybe a spectrum with many small peaks, as in surf or wind sounds or café conversation.
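To make the “easy to predict for shallow networks” idea concrete, here is a toy sketch (entirely my own framing, not from any of the sources): score a 1-D signal by how well a very shallow predictor - here just linear extrapolation from the last two samples - can predict its next sample. Smooth, low-complexity signals get low prediction error and hence a higher score than noise.

```python
# Toy "predictability as beauty" sketch. Assumption: the positive response
# tracks low prediction error of a very shallow predictor. The predictor
# here is the simplest possible one: linear extrapolation.
import math
import random

def predictability_score(signal):
    """Negated mean squared error of next-sample linear extrapolation.

    Higher (closer to 0) means easier to predict.
    """
    errors = []
    for i in range(2, len(signal)):
        predicted = 2 * signal[i - 1] - signal[i - 2]  # linear extrapolation
        errors.append((signal[i] - predicted) ** 2)
    return -sum(errors) / len(errors)

smooth = [math.sin(t / 10) for t in range(200)]       # a "clear form"
random.seed(0)
noisy = [random.uniform(-1, 1) for _ in range(200)]   # white noise

print(predictability_score(smooth) > predictability_score(noisy))  # True
```

A real version would of course use a small learned network instead of fixed extrapolation, but the point is only that the ranking falls out of predictor shallowness, not of any aesthetic knowledge.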
The visual cues for sexual attraction are relatively well known. Obviously, the strength of the endocrine response is high. It is plausible that the high number of different fetishes can be explained by the brain learning to associate almost anything with such a strong signal - not just a single specific thing, as in the spider response.
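The “anything paired with a strong enough signal gets associated” idea can be sketched as a simple delta-rule (Rescorla-Wagner-style) update - a toy model of my own, with made-up parameters:

```python
# Hedged sketch: an arbitrary co-occurring stimulus acquires an association
# proportional to the strength of the unconditioned signal. With a very
# strong signal, essentially any stimulus ends up strongly associated.

def learn_association(trials, learning_rate=0.3, signal_strength=1.0):
    """Move the association weight toward signal_strength on each pairing."""
    w = 0.0
    for _ in range(trials):
        w += learning_rate * (signal_strength - w)  # delta rule
    return w

# A strong signal (strength 5.0) drags any paired stimulus toward it:
arbitrary_stimulus = learn_association(trials=20, signal_strength=5.0)
print(round(arbitrary_stimulus, 2))  # 5.0
```

The contrast with the spider case would then be that the spider response is keyed to one hard-wired cue, while here the learning rule is cue-agnostic and only the signal strength matters.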
There's one issue that I don't have an answer to yet: how would the visual system detect “height”? Could we presume there is a spatial engine that needs to be trained first, and then linked to this phobia?
Or would it make sense to link straight into a spatial predictive system instead: the system predicts that the agent might suddenly need more space to maneuver, and that space turns out to be occupied by a void. At least *I* cannot look up when the fear of heights triggers, and I get a sudden sensation of vertigo: I need to know where the closest brace-point is when I know falling might be imminent.
The visual system wouldn't detect the abstract concept of height; that would be the brain's job to figure out, primed by when the trigger fires and what else correlates with it.
I imagine the visual system would detect visual depth from binocular vision. Babies learn this in the first few months; it is one of the things that cause them distress when it gets activated in the brain. I don't know the research papers, but these might be starting pointers:

https://www.beltz.de/fileadmin/beltz/leseproben/978-3-621-27926-0.pdf (picture 2.2, German)

https://www.thewonderweeks.com/babys-mental-leaps-first-year/ (week 26)

So visual depth you get without much learning (or with other priming steps ahead of it; I understand these are well researched). What is left is the vertical component, which I guess comes from the vestibular system. Looking down + visual depth = height trigger.
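The proposed conjunction - downward gaze plus large visual depth - is simple enough to sketch directly. Everything here is hypothetical: the signal names and thresholds are mine, purely for illustration.

```python
# Minimal sketch of the proposed height trigger: fire only when a downward
# gaze (vestibular/head-pose signal) coincides with large visual depth
# (from binocular vision). Thresholds are made up.

def height_trigger(gaze_pitch_deg, visual_depth_m,
                   pitch_threshold=-30.0, depth_threshold=3.0):
    """Return True when looking steeply down at a large drop."""
    looking_down = gaze_pitch_deg < pitch_threshold
    large_depth = visual_depth_m > depth_threshold
    return looking_down and large_depth

print(height_trigger(-60.0, 20.0))  # looking down a cliff -> True
print(height_trigger(-60.0, 0.5))   # looking down at the floor -> False
print(height_trigger(0.0, 50.0))    # gazing at a distant horizon -> False
```

Note that neither input requires the abstract concept of height: one comes from the vestibular system, the other from low-level depth perception, which fits the “visual system doesn't detect height” point above.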
It is funny that you mention the need to grasp something; maybe that is the hard-wired cue: close the hand.
If I understood correctly, babies cannot focus their eyes properly for the first two months, which may indicate they are building some universal 3D spatial model in their heads as a prerequisite for many of the instincts that arrive in later developmental windows. So there has to be some thread of signals that ties this system to the later affects/instincts, such as the fear of heights.
It is also funny to consider the ability of many ungulate babies to walk immediately at birth, meaning there has to be some seriously robust set of instincts coordinating this for them. This blurs the distinction between instinctual and learned coordination, but I believe that in the end all cortex-having brains benefit from moving away from instincts and toward learned models.
Related: Gene for upright walk in humans discovered: https://www.theguardian.com/science/2008/jun/02/genetics.medicalresearch
Then this one might also be relevant: Inner Alignment in Salt-Starved Rats
I’ll have to read this one too, thanks.