In this specific case, the disregard for volition. In the more general sense, stretching the term by analogy to describe any behavior from an agent with a significant power advantage that wouldn’t be called “Friendly” if an AI with a comparable power advantage over humans did it.
The implicit step here, I think, is that whatever value system an FAI would have would also make a pretty good value system for any agent in a position of power, allowing for limitations of cognitive potential.