Does the LessWrong path have a blind spot related to emotions?
Julia Galef wrote in 2013 about how she updated at CFAR toward emotions being more important than initially assumed. When it comes to dealing with emotions, there’s Gendlin’s Focusing and Circling, and a discourse on meditation.
The discourse is, however, more focused on applied knowledge than on neurobiology-based knowledge.
To be honest, I was hoping to see some discussion of the true nature of the underlying embedding of emotions: what they mean in a computational framework. More importantly, recent papers, such as Google’s work on dopamine as a temporal-difference error signal, all suggest that dopamine and emotions may actually be the rational manifestation of some sort of RL algorithm grounded in Karl Friston’s free-energy principle.
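As a concrete illustration of that idea, here is a hedged toy sketch (my own, not taken from the Google paper): the temporal-difference prediction error is the quantity that the dopamine-as-TD-error interpretation identifies with phasic dopamine. All states and values here are made up for illustration.

```python
# Toy TD(0) value learner. The prediction error `delta` is the signal
# that the dopamine-as-TD-error view identifies with phasic dopamine.
# Purely illustrative; no claim about real neural implementation.

def td_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    """One TD(0) step on the value table V; returns the prediction error."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

# Pavlovian-style chain: a cue is followed by food (reward 1), then nothing.
V = {"cue": 0.0, "food": 0.0, "end": 0.0}
for _ in range(200):
    cue_delta = td_update(V, "cue", "food", 0.0)
    food_delta = td_update(V, "food", "end", 1.0)

# After learning, the error at the reward step shrinks toward zero and the
# cue comes to carry the predicted value (V["cue"] ~= gamma * V["food"]),
# mirroring the classic shift of the dopamine response from reward to cue.
```

The interesting part is the trajectory, not the endpoint: early in training the reward is surprising (large positive `food_delta`); once the cue predicts it, the error there vanishes.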
The nature of the algorithm is then the nature of the learner, and what rule determines the nature of the learner? Likely some complex realization of moving to obtain net-positive energy in order to continue to move. From movement you spawn predictive systems, then as a consequence short-term and long-term planning systems, and finally a single global, fractally optimized embedding for the relative comparison of how likely an action is to lead to a prediction. This relative value is in reality a complex embedding in biological bits known as neurotransmitters, but more importantly, at the language and human level: emotions. Love, anger, surprise, fear, arousal, and anxiety are all just coarse labels. Even rational vs. irrational are just coarse labels on the emotional tensor across large time scales.
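To make the "coarse labels" claim concrete, here is a hedged toy (the prototype coordinates are entirely invented, not real neuroscience): emotion words treated as nearest-prototype partitions of a continuous internal state, so that quite different underlying states collapse onto the same word.

```python
# Hedged toy: emotion words as coarse nearest-prototype labels over a
# continuous 2-D (valence, arousal) state. Prototype positions are made
# up for illustration only.
import math

PROTOTYPES = {
    "anger":    (-0.8, 0.8),
    "fear":     (-0.6, 0.9),
    "love":     (0.9, 0.4),
    "surprise": (0.1, 0.9),
    "calm":     (0.5, -0.7),
}

def coarse_label(state):
    """Map a continuous (valence, arousal) point to its nearest word label."""
    return min(PROTOTYPES, key=lambda w: math.dist(state, PROTOTYPES[w]))

# Two noticeably different internal states collapse to the same word:
same = coarse_label((-0.9, 0.7)) == coarse_label((-0.7, 0.6))
```

The point of the sketch is only that the label is lossy: the word throws away most of the information in the underlying state, which is one reading of the claim above.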
I came here to find other folks thinking about this, but I feel that this is still extremely fringe in both the RL world and the neuroscience world, let alone the rationality world.
I know this is a strange ask, but your comment resonated the most with my intent. Can you point me to, or make an offline intro with, anybody you may know?
There’s Lisa Feldman Barrett’s theory of constructed emotion, which applies a predictive-processing lens to emotion and takes a perspective similar to what you said. She has a popular book about it, but it felt to me pretty wordy while also skimming over the more technical details. You could read this summary of the book and combine it with the paper that presents the theory to a more academic audience.
Separately, there’s the model in Unlocking the Emotional Brain, which goes into much less algorithmic detail but draws on some neuroscience, fits together with a predictive view of emotion, and seems practically useful.
Even rational vs. irrational are just coarse labels on the emotional tensor across large time scales.
There are cases where the word rationality gets used in such a way, but it’s not how the word gets used in this community.
I think you make a mistake when you try to reduce emotions to spikes in neurotransmitters. Interacting with emotions via Gendlin’s Focusing suggests that emotions reflect subagents that are more complex than neurotransmitters. Emotions also seem to come with motor-cortex activity, as they can be felt to be located in body parts. Given plausible reports that they can be felt in amputated body parts as well, a main chunk of the process is likely in the motor cortex rather than in the actual part of the body where the emotion is felt.
The fact that an emotional label can produce a fit in Gendlin’s Focusing suggests that “anger” is more than just a coarse label.
I’m myself neither deeply into machine learning nor into neuroscience, so I don’t know anyone who cares about both to whom I could point you. That said, if you have ideas, writing them up on LessWrong is likely welcome and might get people to give you valuable feedback.
I’ll just point out that the coarse label is the human intuition and the mistake. There is no such label. An instance of anger is a complex encoding of information relating not to “subagents” but to something more fundamental: your “action set.” The coarse resolution of anger is a linguistic one, but biologically, anger does not exist in any form you or I are familiar with.
It seems to me hard to explain why an emotion such as anger might release itself when the corresponding emotional subagent gets heard in Gendlin’s Focusing if anger is not related to subagents.
but biologically, anger does not exist in any form you or I are familiar with.
That sounds to me like you are calling something anger that is not the kind of thing most people mean when they say anger.
If you borrow a word like anger to talk about something biological, and the biological thing doesn’t match what people mean by the term, that suggests you should rather use a new word for the biological thing you want to talk about.