In humans, goals do indeed seem to be significantly determined by emotions. But this is not a defining property of something being a goal, and even in humans it is not a necessary way of implementing goals.
I don’t think she implies that emotions are necessary for implementing a goal—that was the point of mentioning a rationality “filter,” which can aid in accurately translating emotional desires into practical goals that best fulfill those desires, and then in translating practical goals into effective actions.
Can we trace the flow chart back to any entirely non-emotional desires/preferences? I suspect that it would quickly become a semantic issue surrounding the word “emotion.”
I don’t think she implies that emotions are necessary for implementing a goal
That phrase was primarily in reply to daenerys, not Julia.
Can we trace the flow chart back to any entirely non-emotional desires/preferences? I suspect that it would quickly become a semantic issue surrounding the word “emotion.”
What about laws of physics, or evolution? While true (if technically vague) explanations for actions, they are not true cognitive, decision-theoretic, or normative reasons for actions. See this post.
What about laws of physics, or evolution? While true (if technically vague) explanations for actions, they are not true cognitive reasons for actions.
“I don’t want to die,” for example, is obviously both an emotional preference and the result of the natural evolution of the brain. That the brain is an evolved organ isn’t disputed here.
Upvoting everyone. This was a really useful conversation, and I’m pretty sure I was wrong, so I definitely learned something. The evolutionary drives example was much more useful to me than the AI example. Thanks!
(Though I am still of the opinion that the speech itself was great without that info; since it's an introduction to the topic, I don't expect it to cover everything.)
There are explanations of different kinds that hold simultaneously. An explanation of the wrong kind (an evolutionary explanation, for example) that is merely similar (because of shared reasons) to the relevant explanation of the right kind (in this case "goals", a normative or at least cognitive explanation) can still be used to get correct answers, as a heuristic (evolutionary psychology has some predictive power about human behavior and even about goals). This makes it all the easier to confuse them: instead of serving as a rule of thumb, a source of knowledge, the explanation of the wrong kind ends up taking a role that doesn't belong to it, becoming a definition of the thing being sought. For example, "maximizing inclusive fitness" can come to be believed to be an actual human goal.
Upvoted for the clarification. Thanks!