It seems obvious that if the AI has the capacity to torture trillions of people inside the box, it would have the capacity to torture *illions outside the box.
If EY is right, most failures of friendliness will produce an AI uninterested in torture for its own sake. It might try the same trick to escape into the universe simulating this one, but that seems unlikely to work, for a number of reasons. (Edit: I haven’t thought about it blackmailing aliens or alien FAIs.)