Plausible. This depends on the resource/value curve at very high resource levels; i.e., are its values such that running extra minds has diminishing returns, so that it eventually starts allocating resources to other things like recovering mind-states from its past, or does it get value that’s more linear-ish in resources spent? Given that we ourselves are likely to be very resource-inefficient to run, I suspect humans would find ourselves in a similar situation. That is, unless the decryption cost greatly overshot, an AI that is aligned-as-in-keeps-humans-alive would also spend the resources to break a seal like this.
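To make the dependence concrete, here is a toy model of my own (the symbols $R$, $C$, and $V$ are my labels, not quantities from the original argument): let $R$ be the total resource budget, $C$ the decryption cost, $V$ the value of the recovered mind-states, and $u(r)$ the value of running minds on $r$ resources. The AI breaks the seal iff

$$u(R - C) + V \ge u(R), \quad \text{i.e.} \quad V \ge u(R) - u(R - C) \approx C \cdot u'(R).$$

If marginal returns vanish at scale (e.g., $u(r) = \log r$), the right-hand side goes to $0$ as $R$ grows, so any fixed $V > 0$ eventually justifies paying $C$. If value is linear, $u(r) = \alpha r$, the condition becomes the $R$-independent $V \ge \alpha C$, and a decryption cost that greatly overshoots really can price the seal out permanently.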
we ourselves are likely to be very resource-inefficient to run [...] an AI that is aligned-as-in-keeps-humans-alive would also spend the resources to break a seal like this
That an AI should mitigate something is compatible with that thing being regrettable, intentionally inflicted damage. In contrast, the resource-inefficiency of humans is not something we introduced on purpose.