Let’s backtrack a bit. I said:
[Eliezer] makes the much stronger claim that, in a conflict between humans and AIs, we should side with the former regardless of what kind of AI is actually involved in this conflict.
Kaj replied:
I don’t think he makes that claim: all of his arguments on the topic that I’ve seen mainly refer to the kinds of AIs that seem likely to be built by humans at this time, not hypothetical AIs that could be genuinely better than us in every regard.
I then said:
I take it, then, that “friendly” AIs could in principle be quite hostile to actual human beings, even to the point of causing the extinction of every person alive.
But now you reply:
Friendly AI is defined as “human-benefiting, non-human harming”.
It would clearly be wishful thinking to assume that the countless forms of AI that “could be genuinely better than us in every regard” would all act in friendly ways towards humans, given that acting in other ways could help realize other goals that these superior beings might have.