This is a bit broader than cryonics, but let's consider more specific possible causes of extreme torture. Here are the ones that occurred to me:
An AI runs or threatens to run torture simulations as a disincentive. This is entirely a manipulation technique and is instrumental to whatever goals it has, whether benevolent or neutral.
The programmers may work specifically to prevent this. However, MIRI's current stance is that it is safer to let the AI design a utility function for itself. I think this is the most likely and least worrisome way torture simulations could happen (small in scope and for the best).
An AI is programmed to be benevolent, but finds for some reason that suffering is terminally valuable, perhaps due to following a logical and unforeseen conclusion of a human-designed utility function.
I think this is a problematic scenario, and much worse than most AI design failures, because it ends with humans being tortured eternally, spending 3% of their existence in hell, or whatever, rather than just paperclips.
An AI is programmed to be malevolent.
This seems very, very unlikely, given the resources and number of people required to create an AI and the immense and obvious disutility of such a project.
An AI is programmed to obey someone who is malevolent.
Hopefully this will be prevented by the likes of MIRI, along with ethics boards and screening processes.
Aliens run torture simulations of humans as punishment for defecting in an intergalactic acausal agreement.
This is the bloody AI’s problem, not ours.
A country becomes a dystopia that tortures people.
Possible but very unlikely for political and economic reasons.
Thoughts? Please add.
There was an error in the AI's goal system, and + is now -.
Incredibly unlikely: an AI is not going to structure itself so it works to fulfill the inverse of its utility function as the result of a single bit flip.
Sure it is if it’s the right bit, but averting this sort of bug when it’s important is a solved problem of software engineering, not value-alignment-complete.
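To unpack "solved problem" a bit: the standard engineering answer is redundancy plus integrity checks. Below is a minimal, purely illustrative sketch (my own, with made-up names like `utility_sign`; no claim that a real AI would store its goals this way): keep several copies of the goal specification, checksum each one, and refuse to act if the copies cannot be verified.

```python
# Illustrative sketch only: redundant copies + checksums to catch single-bit corruption.
import hashlib
import json
from collections import Counter


def checksum(spec_bytes: bytes) -> str:
    """SHA-256 digest of a serialized goal specification."""
    return hashlib.sha256(spec_bytes).hexdigest()


def store(spec: dict, copies: int = 3) -> list:
    """Store several independent copies, each paired with its checksum."""
    raw = json.dumps(spec, sort_keys=True).encode()
    return [(raw, checksum(raw)) for _ in range(copies)]


def load(stored: list) -> dict:
    """Return the majority value among copies whose checksums still verify.

    A single flipped bit corrupts at most one copy, and the checksum mismatch
    exposes it; if nothing verifies, halt rather than optimise a sign-flipped goal.
    """
    valid = [raw for raw, digest in stored if checksum(raw) == digest]
    if not valid:
        raise RuntimeError("goal specification corrupted; refuse to act")
    majority, _ = Counter(valid).most_common(1)[0]
    return json.loads(majority)


if __name__ == "__main__":
    stored = store({"utility_sign": +1, "objective": "human welfare"})
    # Simulate a single bit flip in one copy.
    raw, digest = stored[0]
    corrupted = bytearray(raw)
    corrupted[0] ^= 0x01
    stored[0] = (bytes(corrupted), digest)
    print(load(stored))  # still recovers the uncorrupted specification
```

None of this touches value alignment; it is just the ordinary machinery for making sure a stored specification is the one you wrote down.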
However, if we assume that everything possible exists, such an AI exists somewhere in the universe and is torturing a copy of me. And that is a disturbing thought.
If everything that's possible exists, so do Boltzmann brains. We need some way to quantify existence, such as by likelihood.
I don't see any problem with BB existence. For each BB there exists a real world where the same observer-moment is justified.
As I have said before, the BB is a scattered being in your model, but you yourself might be a scattered being in the BB’s model. So there are not two worlds, a real one and a BB one. There are just two real ones. A better way to think about it might be like special relativity, where each observer has a resting frame and might be moving relative to other ones. In the same way each observer has a reference frame where they are real.
If there are two AIs, and one is a paperclip maximiser and the other a benevolent AI, the paperclip maximiser may start to torture humans to gain bargaining power over the benevolent AI. Human torture becomes a currency.
Possible, but it seems unlikely. It requires two AIs with different alignments, and it requires the benevolent AI to respond to that sort of threat. This also falls under the first point.
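To make the "requires the benevolent AI to respond" part concrete, here is a toy expected-value sketch (my own illustration, with made-up numbers): the threat only pays off for the paperclip maximiser if there is some chance the benevolent AI concedes, so a credible commitment never to pay removes the incentive entirely.

```python
# Toy payoff model for the torture-as-bargaining-chip scenario.
# All numbers are made up; the point is only the sign of the result.

def paperclipper_expected_payoff(p_concede: float,
                                 concession_value: float = 100.0,
                                 torture_cost: float = 5.0) -> float:
    """Expected payoff to the paperclip maximiser from issuing the threat.

    p_concede: probability the benevolent AI gives in to the threat.
    concession_value: resources extracted if it does.
    torture_cost: resources spent carrying out the threat.
    """
    return p_concede * concession_value - torture_cost

# If the benevolent AI is known to respond to threats, threatening is profitable:
print(paperclipper_expected_payoff(p_concede=0.2))   # 15.0 > 0, so threaten

# If it credibly never concedes, the threat is a pure loss and is never made:
print(paperclipper_expected_payoff(p_concede=0.0))   # -5.0 < 0, so don't threaten
```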