One problem with (11) is that for the threat to be plausible, the AI has to assume:
a) Humans know so little that we have to resort to questionable “tests” of AI safety like this one.
b) Humans know so much that we can afford for our AI safety tests to simulate interactions with an entire universe full of sentients.
The AI version of Pascal’s Wager seems to be much like the human version, only even sillier.
How large is the simulated universe? The AI knows only about the computing capacity simulated within its world, and has no information about the nature of whatever is doing the simulating.