When you say “The more new humans an AI tolerates, the higher its chances of failure”, what exactly does “failure” mean?
Or, more pointedly, would causing human extinction qualify as a "failure" of an AI?