Uh oh. How did ‘should’ sneak its way into our discussion? I’m just talking about positive accounts of human motivation.
I guess the objection I have is to calling the behavioral summary “motivation”, a term that has normative connotations (similarly, “value”, “desire”, “wants”, etc.). Asking “Do we really want X?” (as in, does a positive account of some notion of “wanting” say that we “want” X, to the best of our scientific knowledge) sounds too similar to asking “Should we pursue X?” or even “Can we pursue X?”, but is a largely unrelated question with similarly unrelated answers.
I’m using these terms the way they are standardly used in the literature. If you object to the common usage, perhaps you could just read my articles with the assumption that I’m using these words the way neuroscientists and psychologists do, and then state your concerns about the standard language in the comments? I can’t rewrite my articles for each reader who has their own peculiar language preferences...
The real question is, do you agree with my characterization of the intended meaning of these intentionality-scented words (as used in this article in particular, say) as being mostly unrelated to normativity, that is, to FAI-grade machine ethics? It is unclear to me whether you agree or not. If there is some connection, what is it? It is also unclear to me how confusing or clear this question appears to other readers.
(On the other hand, who or what bears the blame for my (or others’) peculiar confusions is uninteresting.)
I don’t recall bringing up the issue of blame. All I’m saying is that I don’t have time to write a separate version of each post to accommodate each person’s language preferences, so I’m usually going to use the standard language used by the researchers in the field I’m discussing.
Words like ‘motivation’, ‘value’, ‘desire’, ‘want’ don’t have normative connotations in my head when I’m discussing them in the context of descriptivist neuroscience. The connotations in your brain may vary. I’m trying to discuss merely descriptive issues; I intend to start using descriptive facts to solve normative problems later. For now, I want to focus on getting a correct descriptive understanding of the system that causes humans to do what they do before applying that knowledge to normative questions about what humans should do or what a Friendly AI should do.

Does that make sense?
Yes, it clarifies your intended meaning for the words, and resolves my confusion (for the second time; I’d better watch for the confusing influence of those connotations in the future).
(I’m still deeply skeptical that descriptive understanding can help with FAI, but this is mostly unrelated to this post and others, which are good LW material when not confused for discussion of normativity.)
How would you (descriptively, “from the outside”) explain the fact that you didn’t provide information that would resolve my confusion (that you provided now), and instead pointed out that the reason for your actions lies in a tradition (conventional usage), and that I should engage that tradition directly? It seems like you were moved by considerations of assignment of blame (or responsibility), specifically you directed my attention to the process responsible for the problem. (I don’t expect you thought of this so explicitly, but still something caused your response to go the way it went.)
I don’t think blame works the way you seem to.

Blame is condemnation useful in shaping the future. It’s not latent in who had the best opportunity to avoid a problem, or the last clear chance to avoid a problem, or who began a problem, etc.
Responsibility is something political beings invent to relate agents to causation.
When people talk about causation they’re not necessarily playing that game.
Hmmm. I’m sorry you took it that way. I’m starting to get the sense that perhaps you see connotations of normativity and judgment more readily in general, whereas I try to see the world through the lens of a descriptivist project by default, except for those rare occasions when I take a dangerous leap into the confusing lands of normativity.
How would you… explain the fact that you didn’t provide information that would resolve my confusion… and instead pointed out that the reason for your actions lies in a tradition...
I didn’t know which information would resolve your confusion until after I stumbled upon it. The point about common usage was merely meant to explain why I’m using the terms the way I am.