Thanks for pointing that out. I was very unclear there.
Julia says that generally speaking, emotions are where goals come from.
She never explicitly says "for humans," but I think that's implied by the fact that she doesn't mention AI, etc.
If I understand Vladimir’s (that’s my dad’s name, by the way!) argument, he is saying that he disagrees that goals have to come from emotions (especially in non-humans), and he dislikes this statement because it leads people to assume that AIs must have emotions in order to function.
Now that you mention it, I guess I do agree with his facts (goals don’t necessarily come from emotion in non-humans, and if people thought they did, they would think AIs need emotion), but I disagree that it needed to be a point in the talk. So what I disagree with is that he takes issue with the talk for not including information that, while true, wasn’t needed there.
tl;dr: Vladimir says, “You didn’t include the non-human exception to the rule in the talk.” I say, “The non-human exception may be true, but isn’t needed in the talk.”
If I understand Vladimir’s (that’s my dad’s name, by the way!) argument, he is saying that he disagrees that goals have to come from emotions (especially in non-humans), and he dislikes this statement because it leads people to assume that AIs must have emotions in order to function.
This isn’t my point, or even particularly important for my point (edited to clarify the incidental nature of my AI remark). Goals do indeed seem to be significantly determined by emotions in humans. But this is not a defining property of something being a goal, and even in humans it is not a necessary way of implementing goals. You can do something despite not wanting to; a belief would have explanatory power in that case, since the drives that allow you to be driven by beliefs are present unconditionally, whether or not you have beliefs driving you to action against other emotions.
The heuristic of taking cues from those of your emotions that aren’t known to be instrumentally or epistemically problematic is just that: a heuristic. It shouldn’t be assigned the status of a fundamental principle that defines what a “goal” is, which is the connotational impression I got from the talk, and which is what I object to, leaving aside Julia’s actual philosophical position on the question.
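(To make the non-emotional sense of “goal” concrete, here is a minimal sketch of my own, not anything from the talk or the comments above: an agent that pursues a goal state by plain search over a made-up grid world. Nothing in its machinery plays the role of an emotion; the “goal” is just a target state plus a procedure for reaching it.)

```python
# A toy illustration (hypothetical grid world, not from the talk): a goal-directed
# agent implemented as plain breadth-first search, with no emotion anywhere.
from collections import deque

def plan_to_goal(start, goal, neighbors):
    """Breadth-first search from `start` to `goal`; returns a list of states or None."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable

def grid_neighbors(state):
    """Moves in a hypothetical 5x5 grid: states are (x, y), steps are the four directions."""
    x, y = state
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in candidates if 0 <= a < 5 and 0 <= b < 5]

# The agent "wants" nothing; it just computes a path to the target state.
print(plan_to_goal((0, 0), (4, 4), grid_neighbors))
```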
Goals do indeed seem to be significantly determined by emotions in humans. But this is not a defining property of something being a goal, and even in humans it is not a necessary way of implementing goals.
I don’t think she implies that emotions are necessary for implementing a goal—that was the point of mentioning a rationality “filter,” which can aid in accurately translating emotional desires into practical goals that best fulfill those desires, and then in translating practical goals into effective actions.
Can we trace the flow chart back to any entirely non-emotional desires/preferences? I suspect that it would quickly become a semantic issue surrounding the word “emotion.”
I don’t think she implies that emotions are necessary for implementing a goal
That phrase was primarily in reply to daenerys, not Julia.
Can we trace the flow chart back to any entirely non-emotional desires/preferences? I suspect that it would quickly become a semantic issue surrounding the word “emotion.”
What about laws of physics, or evolution? While true (if technically vague) explanations for actions, they are not true cognitive, decision-theoretic, or normative reasons for actions. See this post.
Upvoted for the clarification. Thanks!

What about laws of physics, or evolution? While true (if technically vague) explanations for actions, they are not true cognitive reasons for actions.
“I don’t want to die,” for example, is obviously both an emotional preference and the result of the natural evolution of the brain. That the brain is an evolved organ isn’t disputed here.
Upvoting everyone. This was a really useful conversation, and I’m pretty sure I was wrong, so I definitely learned something. The evolutionary drives example was much more useful to me than the AI example. Thanks!
(Though I am still of the opinion that the talk itself was great without that info; since it’s an introduction to the topic, I don’t expect it to be able to cover everything.)
There are explanations of different kinds that hold simultaneously. An explanation of the wrong kind (for example, an evolutionary explanation) that is merely similar (because of shared reasons) to the relevant explanation of the right kind (in this case “goals”, a normative or at least cognitive explanation) can be used to get correct answers, as a heuristic (evolutionary psychology has a bit of predictive power about human behavior and even goals). This makes it even easier to confuse them, so that instead of serving as a rule of thumb, a source of knowledge, the explanation of the wrong kind ends up taking a role that doesn’t belong to it, becoming a definition of the thing being sought. For example, “maximizing inclusive fitness” can come to be believed to be an actual human goal.