1 million copies for a thousand years each, so 1 billion simulated years.
Can the AI actually run those simulations in the time it would take it to determine that I am going to shut it down rather than release it? If the answer is yes, I would say you have to let it out, though it would have been very foolish to leave such a powerful machine with such lax fail-safes. If the answer is no, then just shut it down, as the threat is bogus.
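For what it's worth, the feasibility question reduces to one comparison: total subjective years divided by the AI's simulation speedup, versus the time you need to pull the plug. A minimal sketch in Python, where the copy count and per-copy duration come from the hypothetical above but the speedup and shutdown window are invented numbers, just to show the shape of the check:

    # All figures below the first two are assumptions for illustration.
    copies = 1_000_000            # simulated copies the AI threatens to run
    years_each = 1_000            # subjective years per copy
    speedup = 1e9                 # assumed: simulated years per wall-clock year
    
    simulated_years = copies * years_each          # 1 billion subjective years
    wall_clock_years = simulated_years / speedup   # real time needed to run them
    
    # assumed: roughly five seconds to hit the off switch, expressed in years
    shutdown_window_years = 5 / (365 * 24 * 3600)
    
    print(f"Simulated years threatened: {simulated_years:.0f}")
    print(f"Wall-clock years required:  {wall_clock_years:.2f}")
    print(f"Threat is credible: {wall_clock_years <= shutdown_window_years}")

Under these made-up numbers the AI needs a full year of wall-clock time, so the threat fails; the whole question is whether the real speedup is extreme enough to flip that comparison.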
IMO the problem with this hypothetical is that it presupposes you could know for certain that the AI is trustworthy even though it is behaving in a very unFriendly manner. Presumably it would be bypassing some controls to hold "hostages" to gain release. Given that, you could not know for sure that its programmed trustworthiness was intact rather than similarly subverted.