[Question] How likely is it that AI will torture us until the end of time?
Disclaimer: in this post I touch on some very dark and disturbing topics. I’m talking about suicide; my reasoning may be wrong and should not be used to retroactively justify suicide.
I’ve been stuck on s-risks for over a month now. My life has been turned upside down since I first learned about this subject. So today I’m sharing my thoughts with you, in the hope of hearing what you think and seeing other points of view.
Suffering risks (s-risks) are risks involving an astronomical amount of suffering, far more than all the suffering that has taken place on Earth so far. The ones I’m going to focus on in this post are those related to a general AI (or even an ASI) and that would affect us humans, today, directly. The scenario that concerns me is an ASI torturing mankind until the end of time. Why would it do that? I don’t know. Could it be malicious? Could it end up with a utility function that maximizes human suffering? Could a paperclip maximizer torture us if that were somehow a source of energy, or a bargaining chip to blackmail a benevolent AI? I’m not an AI expert, so I carry no weight in the “will we succeed in controlling AGI or not” debate. My feeling is that, given how divided opinions are, anything can happen, and that no one can therefore state with 100% certainty that s-risks won’t occur. What’s more, we’re talking about an intelligence superior to our own, and therefore, by definition, unpredictable. The point I want to make in this post centers on the non-zero probability that the creation of an AGI will lead us into an eternal hell.
When we talk about things worse than death, about torture, I think the human brain runs into a number of cognitive biases that push it to minimize the idea or simply ignore it because it is too uncomfortable. So I encourage you to work through these biases to get an objective view of the subject. One thing that is often underestimated is just how bad suffering can get. Our bodies contain a huge number of ultra-sensitive nerves that can be activated to send unbearable signals to the brain. Suffering can reach scales that are appalling, horrifying. The worst possible pain seems to be fire: apparently, people pulled from a fire with severe burns sometimes beg the firefighters to finish them off, such is the pain.
Even if s-risks have only a one-in-a-billion chance of occurring, their severity makes up for it, because their disutility is so extreme. We’re in something like a Pascal’s mugging, but from the negative side, where the trade-off is between a potential infinity of years of suffering and suicide, which avoids them for sure. And why would now be the only moment we can act? In the case of a hard take-off, where an AGI becomes superintelligent in a short space of time, we’d lose before we even knew there was a fight, and our fate would be sealed.
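To make the structure of that trade-off explicit, here is a minimal expected-value sketch (the probability p, the disutility S of the torture scenario, and the other utility terms are placeholders for illustration, not estimates I’m defending):

E[U(keep living)] = p · (−S) + (1 − p) · U(ordinary life)
E[U(suicide)] = U(non-existence)

If S is allowed to be unboundedly large (eternal suffering), then for any p > 0, however tiny, the p · (−S) term eventually dominates the first expectation. That’s all the sketch is meant to show; it just formalizes the sentence above.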
One argument that could be made against suicide is quantum immortality, and potentially quantum torment. That would be a situation of permanent agony, and therefore a form of hell. However, this is already the default outcome for each and every one of us, since we are all bound to die one day. There’s also the chance of being resurrected. But that may be impossible, and there’s also the problem of individuality: a clone would be exactly like me, but my consciousness wouldn’t be in its body. So suicide seems to be a net positive with regard to s-risks, as it would potentially avoid them for sure, or at least reduce their probability (from a personal point of view only). It means choosing a certainly bad outcome (suicide/non-existence) rather than an uncertain but infinitely bad one (continuing to live and thereby taking the risk that s-risks come to pass).
I understand that my reasoning is disturbing. Does anyone know more about this and could say that the risk of being tortured until the end of time is impossible? I’m curious to know what you think about all this, because you’re certainly the only community that can discuss it in a reasoned and rational way.