Emotions are clearly necessary for forming the goals, rationality is simply lame without them.
What does this mean?
a) Emotions are logically necessary for forming goals, rational beings are incapacitated without emotions.
b) Emotions are logically necessary for forming goals, rational beings are incapacitated without goals.
c) Emotions are logically necessary for forming goals, rationality has no normative value to a rational being without emotions.
d) Emotions are logically necessary for forming goals, rationality has no normative value to a rational being without goals.
e) Emotions are necessary for forming goals among humans, rational humans are incapacitated without emotions.
f) Emotions are necessary for forming goals among humans, rational humans are incapacitated without goals.
g) Emotions are necessary for forming goals among humans, rationality has no normative value to humans without emotions.
h) Emotions are necessary for forming goals among humans, rationality has no normative value to humans without goals.
i) (Other.)
Good question. My intended meaning was closest to (h). (Although isn’t (g) pretty much equivalent?)
Yay! Word of God on the issue! (Warning: TvTropes). Good to know I wasn’t too far off-base.
I can see how g and h can be considered equivalent using the implication emotions → goals. In fact, I would assume that would also make a and b pretty much equivalent, as well as c and d, e and f, etc.
Incidentally, the filmmaker didn’t capture my slide with the diagram of the revised model of rationality and emotions in ideal human* decision-making, so I’ve uploaded it.
The Straw Vulcan model of ideal human* decision-making: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-00-pm.png
My revised model of ideal human* decision-making: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-14-pm.png
*I realize now that I need this modifier, at least on Less Wrong!
If emotions are necessary but not sufficient for forming goals among humans, the claim might be that rationality has no normative value to humans without goals, while not addressing rationality's normative value to humans who have emotions but don't have goals.
If you see g and h as equivalent, this implies that you believe emotions are both necessary and sufficient for forming goals among humans.
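To spell that out, here is a quick propositional sketch; the symbols E, G, V and the rendering of g and h as conditionals are my own shorthand, not anything from the talk:

$$
\begin{aligned}
&\text{Let } E = \text{has emotions},\quad G = \text{has goals},\quad V = \text{rationality has normative value}.\\
&\text{(g)}:\ \lnot E \rightarrow \lnot V \qquad\qquad \text{(h)}:\ \lnot G \rightarrow \lnot V\\
&\text{If emotions are necessary } (G \rightarrow E):\quad \lnot E \Rightarrow \lnot G \Rightarrow \lnot V \text{ by (h), so (h) entails (g).}\\
&\text{If emotions are sufficient } (E \rightarrow G):\quad \lnot G \Rightarrow \lnot E \Rightarrow \lnot V \text{ by (g), so (g) entails (h).}
\end{aligned}
$$

With necessity but not sufficiency, a person who has emotions, has no goals, and for whom rationality still has value satisfies g but falsifies h, which is exactly the case left open above.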
While this might be true for humans, it would be strange to say that once goals are formed, the loss of emotion in a person would obviate all of their already-formed non-emotional goals. So it's not just that you're discussing the human case rather than the AI case; you're discussing the typical human.
From the context of her talk, I have high confidence that the “them” at the end of her sentence refers to emotions, not goals. Therefore I would reject translations b, d, f, and h.
I would also reject a as far too sweeping for the level of her talk.
Also from the context of her talk, I would say that the “normative value” translations are much more likely than the “incapacitated” translations. My confidence in this is much lower than my confidence in my first assertion, though.
That leaves us with c, g, and other. I've already argued that her talk was implicitly about human rationality, which leaves us with g or other.
Can’t think of a better option, so my personal opinion is g.