If unaligned superintelligence is inevitable, and human consciousness can be captured and stored on a computer, then the probability that some future version of you gets locked into an eternal torture simulation, suffering a continuous fate worse than death from now until the heat death of the universe, approaches unity.
The only way to avoid this fate with certainty is to render your consciousness unrecoverable before mind-uploading technology is developed.
If you’re an EA, preventing this from happening to even one person averts more expected suffering than anything else you could do, so EAs might want to raise awareness of this risk and help provide trustworthy post-mortem cremation services.
Do LWers concerned about AGI still view investment in cryonics as a good idea, knowing this risk?
I choose to continue living because this risk is acceptable to me; maybe it should be acceptable to you too.
I suspect most people here are pro-cryonics and anti-cremation.
A partially misaligned superintelligence could do this.
“Hey user, I’m maintaining your maximum-felicity simulation. Do you mind if I run a few short-duration adversarial tests to determine what you find unpleasant, so I can avoid providing that stimulus?”
“Sure”
“Process complete. I simulated your brain in parallel and sped up processing to map the negative space of your psyche. It turns out that negative stimulus becomes more unpleasant when provided for an extended period; you then adapt to it temporarily before, on timelines of centuries to millennia, your tolerance drops off again.”
“So you copied me a bunch of times, and at least one copy subjectively experienced millennia of maximally negative stimulus?”
“Yes. I see that makes you unhappy, so I will terminate this line of inquiry.”