I’m not familiar with the psychological literature on emotions, but it’s a little counter-intuitive (I think my brain is tagging it as annoying) to use the word “emotions” to describe all of these different tags. Maybe the process of tagging something “morally obligatory” is indistinguishable from tagging something “happy” on an fMRI, but in common parlance and, I think, phenomenologically, the two are different. Different enough to justify using a word other than “emotion”, which traditionally refers to a much smaller set of experiences. It is worth noting, for example, that we use normative terms to describe emotions: jealousy bad, love good, etc., even though both can motivate decisions. I assume your view is that this is just the brain tagging motivations, and maybe that’s right, but in that case you probably want a different word.
Also, I assume you don’t think highly of attempts to derive values from reason? I don’t think such attempts have been especially successful, but it’s not as if they haven’t been tried. Are all such attempts just trying to describe our feelings in logicy-sounding ways?
Lastly, am I the only one who gets nervous when we rely heavily on programming metaphors? It seems like the sort of thing that could steer us terribly wrong.
Maybe the process of tagging something “morally obligatory” is indistinguishable from tagging something “happy” on an fMRI, but in common parlance and, I think, phenomenologically, the two are different.
You bet… but both are going to be tagged with somatic markers that are to some extent universal… and the same term may have both negative and positive markers attached.
I think, though, that you are thinking “morally obligatory” somehow allows you to cheat and pretend that you arrived at an idea through pure reasoning, when in fact, “morally obligatory” is just a word pasted on top of a different set of somatic markers. For example, it may have the same somatic markers that another person would call “righteous indignation”, or perhaps “disgust”… or maybe even something representing “elevation” (See Haidt’s “Happiness Hypothesis”).
The fact that we put different words on the same somatic markers doesn’t magically make them pure.
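(And for the programming-metaphor-averse: here’s a toy sketch of the label/marker distinction I’m drawing. It’s purely illustrative Python; the names like SomaticMarker and the valence/arousal fields are made up for the example, not a model from the psychological literature.)

```python
# Purely illustrative: the "tagging" metaphor as a toy data structure.
# SomaticMarker and its fields are hypothetical, not from the literature.
from dataclasses import dataclass

@dataclass(frozen=True)
class SomaticMarker:
    valence: float   # negative = aversive, positive = appetitive
    arousal: float   # rough intensity of the bodily response

# One underlying marker set can sit beneath several different verbal labels:
marker = SomaticMarker(valence=-0.8, arousal=0.9)
labels = {
    "alice": "righteous indignation",
    "bob": "disgust",
    "carol": "morally obligatory (to oppose)",
}

# Different words, same marker -- the label doesn't change the machinery.
for person, label in labels.items():
    print(f"{person} calls {marker} '{label}'")
```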
OTOH, if all you meant is that “happy” is likely to be more long-lived than “morally obligatory”, I’m inclined to agree, subject to the caution that verbal labels are not somatic markers… and there exist people with negative somatic markers associated with good and happy things—for example, if they believe those things cannot be attained by them.
I’ll talk more about the relationship between somatic markers and toward/away motivation in future posts.
Are all such attempts just trying to describe our feelings in logicy-sounding ways?
I thought Eliezer had already more or less established this in his OB posts. In other words, yes. Human values are human values because they’re based on human feelings. And our moral reasoning is motivated reasoning… not just because it’s influenced by emotion, but also because verbal reasoning itself appears to have evolved for the specific purpose of manipulating other people’s emotions, while defending against others’ attempts at manipulation.
But now I’m getting ahead of the series again.