I take issue with this section [31:57]:

The only reason that you have goals is because you have emotions, because you care about some outcomes of the world more than others, because you feel positively about some potential outcomes and negatively about other potential outcomes. If you really didn’t care about any potential state of the world more or less than any other potential state of the world, it wouldn’t matter how skilled your reasoning abilities were, you’d never have reason to do anything. [...] Emotions are clearly necessary for forming the goals, rationality is simply lame without them. [...] I would say that you can’t be wrong about what you want.
This walks a fine line between the naturalistic fallacy (taking goals outside System 2’s domain, declaring reflection on one’s own goals or improvement on System 1’s recommendations impossible, and treating the acceptance of emotion’s recommendations as a sacred fact of mysterious origin) and giving a descriptive account of human cognition, similar in this non-normative role to evolutionary psychology. (Incidentally, this is the kind of thinking (or framing of thoughts) that leads people to declare that AIs must necessarily have emotions, or they’d just sit there, doing nothing.)
This is partially retracted later, based on the distinction between terminal and instrumental goals and the rejection of emotions known to be caused by false beliefs [38:26]:

Emotions can be instrumentally and epistemically irrational, and using rationality is what helps us recognize that and shape our goals based not on what our automatic emotional desires are, but on what our rationality-filtered emotional desires are.
This still leaves the status quo justification in place: personal emotion remains the source of purpose, with some exceptions.
I disagree. Not all rationality deals with AI and the Singularity. I know the stuff I prefer reading about doesn’t, and I thought it was pretty obvious that this talk doesn’t either. So yes, perhaps some super-intelligence wouldn’t need emotions to have goals, but humans do. And it is implied that Julia is talking about human rationality in her speech.

It’s like when you first learn physics, and they say, “Think of the world this way. It’s not actually this way, but for what you need to learn about right now, that’s the best model to use.”

Especially in an introductory course, you can’t get anywhere if you get bogged down trying to explain all the exceptions to the rules. Sometimes you’ve got to generalize.

The goal of preparing a good explanation is not served best by making these particular mistaken claims.
This is the kind of thinking (or framing of thoughts) that leads people to declare that AIs must necessarily have emotions, or they’d just sit there, doing nothing.
I disagree. Not all rationality deals with AI and the Singularity.
It’s not clear what you disagree with, but the point was that a way of thinking about goals led to a wrong conclusion. The AI example was an attempt to show the assertion is wrong, but there is no telling what mischief a wrong thought will ultimately lead to.
The point was not that talks on human rationality should deal with AI or a singularity.
I don’t necessarily agree with Vladimir_Nesov’s comment; it depends on the interpretation of an ambiguous quote he cited, and I haven’t heard this speech.

Thanks for pointing that out. I was very unclear there.
Julia says that generally speaking, emotions are where goals come from.
She never explicitly says “for humans,” but I think that is implied by the fact that she doesn’t mention AI, etc.
If I understand Vladimir’s (that’s my dad’s name, by the way!) argument, he is saying that he disagrees that goals have to come from emotions (especially in non-humans), and he dislikes this statement because then people assume that AIs must necessarily have emotions in order to function.

Now that you mention it, I guess I do agree with his facts (goals don’t necessarily come from emotion in non-humans, and if people thought they did, then they would think AIs need emotion), but I disagree that that needed to be a point in the talk. So what I disagree with is that he takes issue with the talk for not including information that, while true, wasn’t needed in the talk.

tl;dr: Vladimir says, “You didn’t include the non-human exception to the rule in the talk.” I say, “The non-human exception may be true, but it isn’t needed in the talk.”
If I understand Vladimir’s (that’s my dad’s name, by the way!) argument, he is saying that he disagrees that goals have to come from emotions (especially in non-humans), and he dislikes this statement because then people assume that AIs must necessarily have emotions in order to function.
This isn’t my point, or even particularly important for my point (edited to clarify the incidental nature of my AI remark). Goals do indeed seem to be significantly determined by emotions in humans. But this is not a defining property of something being a goal, and even in humans it is not a necessary way of implementing goals. You can do something despite not wanting to; in that case a belief would have the explanatory power, since the drives that allow you to be driven by beliefs are present unconditionally, whether or not you have beliefs driving you to action against other emotions.
The heuristic of taking cues from your emotions that aren’t known to be instrumentally or epistemically problematic is just that, a heuristic. It shouldn’t be assigned the status of a fundamental principle that defines what a “goal” is, which is the connotational impression I got from the talk, and which is what I object to, leaving aside Julia’s actual philosophical position on the question.
Goals do indeed seem to be significantly determined by emotions in humans. But this is not a defining property of something being a goal, and even in humans it is not a necessary way of implementing goals.
I don’t think she implies that emotions are necessary for implementing a goal—that was the point of mentioning a rationality “filter,” which can aid in accurately translating emotional desires into practical goals that best fulfill those desires, and then in translating practical goals into effective actions.
Can we trace the flow chart back to any entirely non-emotional desires/preferences? I suspect that it would quickly become a semantic issue surrounding the word “emotion.”
I don’t think she implies that emotions are necessary for implementing a goal
That phrase was primarily in reply to daenerys, not Julia.
Can we trace the flow chart back to any entirely non-emotional desires/preferences? I suspect that it would quickly become a semantic issue surrounding the word “emotion.”
What about laws of physics, or evolution? While true (if technically vague) explanations for actions, they are not true cognitive or decision theoretic or normative reasons for actions. See this post.
What about laws of physics, or evolution? While true (if technically vague) explanations for actions, they are not true cognitive reasons for actions.

Upvoted for the clarification. Thanks!
“I don’t want to die,” for example, is obviously both an emotional preference and the result of the natural evolution of the brain. That the brain is an evolved organ isn’t disputed here.
Upvoting everyone. This was a really useful conversation, and I’m pretty sure I was wrong, so I definitely learned something. The evolutionary drives example was much more useful to me than the AI example. Thanks!
(Though I am still of the opinion that the speech itself was great without that info; being an introduction to the topic, I don’t expect it to be able to cover everything.)
There are explanations of different kinds that hold simultaneously. An explanation of the wrong kind (for example, an evolutionary explanation) that is merely similar (because of shared reasons) to the relevant explanation of the right kind (in this case “goals,” a normative or at least cognitive explanation) can still be used to get correct answers, as a heuristic (evolutionary psychology has a bit of predictive power about human behavior and even goals). This makes it all the easier to confuse the two, so that instead of serving as a rule of thumb, a source of knowledge, the explanation of the wrong kind takes on a role that doesn’t belong to it and becomes a definition of the thing being sought. For example, “maximizing inclusive fitness” can come to be believed to be an actual human goal.