using a computation that requires a few orders of magnitude more energy than humanity currently produces per decade
Compute might get more expensive, not cheaper, because it would be possible to make better use of it (running minds, not stretching keys). The AI would then be weighing compute's marginal use against access to the sealed data.
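For concreteness, the kind of seal being discussed can be sketched as sequential key stretching: a key derived by iterated hashing, where the iteration count sets the compute cost of recovery. This is a toy illustration, not a proposal of a specific scheme; the function name and seed are made up, and a real seal would calibrate iterations to the intended energy budget.

```python
import hashlib

def stretch_key(seed: bytes, iterations: int) -> bytes:
    """Derive a key by repeated SHA-256 hashing.

    Each round depends on the previous round's output, so the work
    is inherently sequential: cost scales linearly with the
    iteration count and cannot be parallelized away.
    """
    digest = seed
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

# Toy parameters: a real seal would pick an iteration count whose
# total energy cost lands near the intended threshold.
key = stretch_key(b"public seed", 1_000_000)
```

Anyone (human or AI) willing to pay the sequential compute can rederive `key` from the public seed and decrypt; the question in this thread is whether the marginal value of that compute ever drops low enough to make paying worthwhile.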
Plausible. This depends on the shape of the resource/value curve at very high resource levels: i.e., are its values such that running extra minds has diminishing returns, so that it eventually reallocates resources to other things, like recovering mind-states from its past, or does it get value more linearly in resources spent? Given that we ourselves are likely to be very resource-inefficient to run, I suspect we humans would find ourselves in a similar situation. That is, unless the decryption cost greatly overshot, an AI that is aligned-as-in-keeps-humans-alive would also spend the resources to break a seal like this.
we ourselves are likely to be very resource-inefficient to run [...] an AI that is aligned-as-in-keeps-humans-alive would also spend the resources to break a seal like this
That an AI should mitigate something is compatible with that something being regrettable, intentionally inflicted damage. By contrast, the resource-inefficiency of humans is not something we introduced on purpose.