Good question. My intended meaning was closest to (h). (Although isn’t (g) pretty much equivalent?)
Yay! Word of God on the issue! (Warning: TvTropes). Good to know I wasn’t too far off-base.
I can see how g and h can be considered equivalent via the emotions -> goals relationship. In fact, I would assume that would also make a and b pretty much equivalent, as well as c and d, e and f, etc.
Incidentally, the filmmaker didn’t capture my slide with the diagram of the revised model of rationality and emotions in ideal human* decision-making, so I’ve uploaded it.
The Straw Vulcan model of ideal human* decisionmaking: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-00-pm.png
My revised model of ideal human* decisionmaking: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-14-pm.png
*I realize now that I need this modifier, at least on Less Wrong!
If emotions are necessary but not sufficient for forming goals among humans, then the claim might be that rationality has no normative value to humans without goals, while leaving unaddressed rationality's normative value to humans who have emotions but lack goals.
If you see them as equivalent, this implies that you believe emotions are necessary and sufficient for forming goals among humans.
As much as this might be true for humans, it would be strange to say that, after goals are formed, the loss of emotion in a person would obviate all of their already-formed non-emotional goals. So it's not just that you're discussing the human case and not the AI case; you're discussing the typical human.