I think it’s fairly unlikely that suicide becomes impossible in AI catastrophes. The AI would have to be anti-aligned, and creating such an AI would require hitting a precise target in AI design space, just as creating a Friendly AI does. However, given the extreme disvalue a hyperexistential catastrophe would produce, such scenarios are perhaps still worth considering, especially for negative utilitarians.
I think so. By symmetry, imperfect anti-alignment will destroy almost all the disvalue the same way imperfect alignment will destroy almost all the value. Thus, the overwhelming majority of alignment problems are solved by default with regard to hyperexistential risks.
More intuitively, problems become much easier when there isn’t a powerful optimization process to push against. E.g. computer security is hard because there are intelligent agents out there trying to break your system, not because cosmic rays will randomly flip some bits in your memory.
Huh, good question. Initially I assumed the answer was “yes, basically” and thought the probability was high enough that it wasn’t worth getting into. But the scenarios you mention are making me less sure of that.
I’d love to get input from others on this. It’s actually a question I plan on investigating further anyway as I do some research and decide whether or not I want to sign up for cryonics.