The more significant issue is the lack of respect for autonomy and the other individual’s goals. It is, shall we say, “unFriendly”.
It’s perfectly possible to have excellent models of other people’s psyches but no respect for their autonomy; in fact it’s a useful skill in sales and marketing. In the pathological extreme, it’s popularly called “sociopathy”.
I suggest that unFriendly is a hugely more useful general concept than “objectifying”. I often find myself frustrated that I can’t use it in conversation with strangers.
The more I think about it, the more I suspect it’s actually the best description yet of the underlying complaint, at least from my perspective.
The term “objectifying” carries a lot of additional implications and connotations that distract; cf. the “I objectify supermarket cashiers all the time” remarks and the “yes, but that’s not really wrong” replies.
I’d say its entire denotation is useless, which explains the problems: we’re fighting over denotation when all the data is in the connotation (and ought to be extracted to stand alone).
“unFriendly” is the more general concept, but I think “objectifying” is still an important special case.
Also, ‘unFriendly’ is supposed to be a technical term involving AI ‘behavior’, and as Eliezer points out, it’s hard to see how it applies to human behavior.
Right—the human concept is good ol’ “unfriendly”, no CamelCase.
“UnFriendly” is supposed to be a technical term covering a tremendous range of AIs. What do you mean by it in this context? Flawed fun theory? Disregard for volition?
In this specific case, the disregard for volition. In the more general sense, I’m stretching the term by analogy to cover any behavior from an agent with a significant power advantage that wouldn’t be called “Friendly” if an AI with a power advantage over humans did it.
The implicit step here, I think, is that whatever value system an FAI would have would also make a pretty good value system for any agent in a position of power, allowing for limitations of cognitive potential.
Mostly disregard for volition, but also satisficing too early on fun.
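To make the analogy in this subthread concrete, here is a minimal toy sketch of the proposed predicate, assuming a deliberately crude model: every name in it is a hypothetical illustration, not any real Friendliness formalism. It encodes the definition above: an action by an agent with a significant power advantage counts as “unFriendly” iff it would fail the Friendliness test were an AI with a power advantage over humans to perform it, where the test here checks both disregard for volition and satisficing too early on fun.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    power: float  # crude stand-in for the ability to override others' choices

@dataclass
class Action:
    respects_volition: bool  # does it honor the other party's goals?
    maximizes_fun: bool      # or does it satisfice on fun too early?

def would_be_friendly_from_ai(action: Action) -> bool:
    # Hypothetical stand-in for the Friendliness criterion discussed above:
    # an action passes only if it respects volition and doesn't stop short on fun.
    return action.respects_volition and action.maximizes_fun

def is_unfriendly(agent: Agent, patient: Agent, action: Action) -> bool:
    # The analogy: an action by any agent with a significant power advantage
    # counts as "unFriendly" iff it would fail the Friendliness test were an
    # AI with a power advantage over humans to perform it.
    return agent.power > patient.power and not would_be_friendly_from_ai(action)

# Example: an agent with a power advantage who models someone's psyche well
# but disregards their volition (the sales/marketing case above).
print(is_unfriendly(Agent(power=2.0), Agent(power=1.0),
                    Action(respects_volition=False, maximizes_fun=True)))  # True
```

The design choice worth flagging: power asymmetry is a precondition, not the offense itself; on this reading the same action by a peer might be merely rude, while from an agent in a position of power it becomes unFriendly.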