“UnFriendly” is supposed to be a technical term covering a tremendous range of AIs. What do you mean by it in this context? Flawed fun theory? Disregard for volition?
In this specific case, the disregard for volition. In the more general sense, I'm stretching the term by analogy to cover any behavior by an agent with a significant power advantage that wouldn't be called "Friendly" if done by an AI with a power advantage over humans.
The implicit step here, I think, is that whatever value system an FAI would have would also make a pretty good value system for any agent in a position of power, allowing for that agent's cognitive limitations.
Mostly the disregard for volition, but also satisficing too early on fun.