When discussing a powerful new tech, people often focus on what could go horribly wrong, forgetting to consider what could go gloriously right.
What could go gloriously right with mind uploading? It could eliminate involuntary death, saving trillions of future lives. This consequence alone massively outweighs the corresponding X- and S-risks.
At least from the orthodox QALY perspective on “weighing lives”, the benefits of WBE don’t outweigh the S-risks, because for any given number of lives, the resources required to make them all suffer are smaller than the resources required for the glorious version.
The benefits of eventually developing WBE do outweigh the X-risks, if we assume that
human lives are the only ones that count,
WBE’d humans still count as humans, and
WBE is much more resource-efficient than anything else that future society could do to support human life.
However, orthodox QALY reasoning of this kind can’t justify developing WBE soon (rather than, say, after a Long Reflection), unless there are really good stories about how to avoid both the X-risks and the S-risks.
As far as I know, mind uploading is the only tech that can reduce the risk of death (from all causes) to almost zero. It is almost impossible to destroy a mind that is running on resilient distributed hardware with tons of backups hidden in several star systems.
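To make the “almost impossible” claim a bit more concrete, here is a rough sketch; the per-copy destruction probability $p$ and the copy count $N$ are illustrative assumptions, and the copies are assumed to fail independently. If each backup has probability $p$ of being destroyed in a given period and there are $N$ copies in causally separated locations, then

$$P(\text{all copies destroyed}) \approx p^{N}, \qquad \text{e.g. } p = 0.01,\ N = 10 \;\Rightarrow\; P \approx 10^{-20}.$$

The catch is the independence assumption: correlated catastrophes (such as a misaligned AI seizing all the hardware at once) are exactly the X- and S-risks under discussion here.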
There is a popular idea that some very large amount of suffering is worse than death. I don’t subscribe to it. If I’m tortured for X billion years, and then my mind is repaired, then this fate is still much better than permanent death. There is simply nothing worse than permanent death—because (by definition) it cannot be repaired. And everything else can be repaired, including the damage from any amount of suffering.
In such calculations, I would consider 1 human permadeath equal to at least 1 human life that is experiencing the worst possible suffering until the end of the universe.
I also don’t see how eliminating any arbitrarily large amount of suffering could be preferable to saving 1 life. Unless the suffering leads to permadeath, the sufferers can get over it. The dead cannot. Bad feelings are vastly less important than saved lives.
It’s a good idea to reduce suffering. But the S-risk is trivially eliminated from the equation if the tech in question is life-saving.
orthodox QALY reasoning of this kind can’t justify developing WBE soon (rather than, say, after a Long Reflection)
There are currently ~45 million permadeaths per year. Thus, any additional year without widely accessible mind uploading means the equivalent of 45+ million more humans experiencing the worst possible suffering until the end of the universe. In 10 years, that’s roughly half a billion; in 1,000 years, it’s 45 billion. This high cost of the Long Reflection is one more reason why it should never be forced upon humanity.
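To make the running total explicit (a back-of-the-envelope sketch, taking the ~45 million/year figure above at face value and assuming it stays constant):

$$45 \times 10^{6}\,\text{yr}^{-1} \times 10\,\text{yr} = 4.5 \times 10^{8} \approx \text{half a billion}, \qquad 45 \times 10^{6}\,\text{yr}^{-1} \times 1000\,\text{yr} = 4.5 \times 10^{10} = 45\ \text{billion}.$$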
I would consider 1 human permadeath equal to at least 1 human life that is experiencing the worst possible suffering until the end of the universe.
This is so incredibly far from where I would place the equivalence, and I think where almost anyone would place it, that I’m baffled. You really mean this?
There is an ancient and (unfortunately) still very popular association between death and sleep / rest / peace / tranquility.
The association is so deeply ingrained that it is routinely invoked by most people who have to speak about death, e.g. “rest in peace”, “put to sleep”, “he is in a better place now”, etc.
The association is harmful.
The association suggests that death could be a valid solution to pain, which is deeply wrong.
It’s the same kind of wrongness as suggesting killing a child to make the child less sad.
Technically, the child will not experience sadness anymore. But infanticide is not a sane person’s solution to sadness.
The sane solution is to find a way to make the child less sad (without killing them!).
The sane solution to suffering is to reduce suffering. Without killing the sufferer.
For example, if a cancer patient is in great pain, the most ethical solution is to cure them of cancer, and use efficient painkillers during the process. If there is no cure, then use cryonics to transport them into the future where such a cure becomes available. Killing the patient because they’re in pain is a sub-optimal solution (to put it mildly).
I can’t imagine any situation where permadeath is preferable to suffering. With enough tech and time, all kinds of suffering can be eliminated, and their effects can be reversed. But permadeath is, by definition, non-reversible and non-repairable.
If one must choose between a permanent loss of human life and some temporary discomfort, it doesn’t make sense to prefer the permanent loss of life, regardless of the intensity of the discomfort.
(I agree wholeheartedly with almost everything you’ve said here, and have strong upvoted, but I want to make space for the fact that some people don’t make sense, and some people reflectively endorse not making sense, and so while I will argue against their preference for death over discomfort, I will also fight for their right to make the wrong choice for themselves, just as I fight for your and my right to make the correct choice for ourselves. Unless there is freedom for people to make wrong choices, we can never move beyond a socially-endorsed “right” choice to something Actually Better.)
Something is handicapping your ability to imagine what the “worst possible discomfort” would be.
The thing is: regardless of how bad the worst possible discomfort is, dying is still a rather stupid idea, even if you have to endure the discomfort for millions of years. Because if you live long enough, you can find a way to fix any discomfort.
I wrote in more detail about it here.
There is a popular idea that some very large amount of suffering is worse than death. I don’t subscribe to it. If I’m tortured for X billions of years, and then my mind is repaired, then this fate is still much better than permanent death. There is simply nothing worse than permanent death—because (by definition) it cannot be repaired.
This sweeps a large number of philosophical issues under the rug by begging the conclusion (that death is the worst thing), and then using that conclusion to justify itself (death is the worst thing, and if you die, you’re stuck dead, so that’s the worst thing).
I predict that most (all?) ethical theories that assume some amount of suffering is worse than death have internal inconsistencies.
My prediction is based on the following assumption:
permanent death is the only brain state that can’t be reversed, given sufficient tech and time
The non-reversibility is the key.
For example, if your goal is to maximize the happiness of every human, you can achieve more happiness if none of the humans ever die, even if some humans will have periods of intense and prolonged suffering. Because you can increase the happiness of the humans who suffered, but you can’t increase the happiness of the humans who are non-reversibly dead.
If your goal is to minimize suffering (without killing people), then you should avoid killing people. Killing people includes withholding life-extension technologies (like mind uploading), even if radical life extension will cause some people to suffer for millions of years. You can decrease the suffering of humans who are suffering, but you can’t do that for humans who are non-reversibly dead.
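To spell out the shape of both arguments (a sketch, under the assumption that well-being can be summed over time; $h_i$ and $T_i$ are illustrative notation, not anything from the original comments): let $h_i(t)$ be person $i$’s momentary well-being and $T_i$ the time they cease to exist, so their lifetime total is

$$U_i = \int_{0}^{T_i} h_i(t)\,dt .$$

A person who suffers for a long stretch ($h_i(t) < 0$) but is later repaired can keep accumulating positive well-being for as long as $T_i$ keeps growing; permadeath fixes $T_i$ and caps $U_i$ forever. On this accounting, with unbounded lifespans, any finite stretch of suffering is eventually outweighed, which is the non-reversibility point above.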
The mere existence of the option of voluntary immortality necessitates some quite interesting changes in ethical theories.
Personally, I simply don’t want to die, regardless of the circumstances. The circumstances might include any arbitrarily large amount of suffering. If a future-me ever begs for death, consider him in need of some brain repair, not in need of death.
While I (a year late) tentatively agree with you (though a million years of suffering is a hard thing to swallow compared to the instinctually almost mundane matter of death), I think there’s an assumption in your argument that bears inspection. Namely, I believe you are maximizing happiness at a given instant in time: the present, the limit as time approaches infinity, etc. (Or, perhaps, you are predicating the calculations on the possibility of escaping the heat death of the universe, and being truly immortal for eternity.)
A (possibly) alternate optimization goal: maximize human happiness, summed over time. See, I was thinking the other day, and it seems possible we may never evade the heat death of the universe. In such a case, if you only value the final state, nothing we do matters, whether we suffer or go extinct tomorrow. At the very least, this metric is not helpful, because it cannot distinguish between any two states. So a different metric must be chosen. A reasonable substitute seems to be to effectively take the integral of human happiness over time and sum it up. The happy week you had last week is not canceled out by a mildly depressing day today, for instance; it still counts. Conversely, suffering for a long time may not be automatically balanced out the moment you stop suffering (though I’ll grant this goes a little against my instincts).
If you DO assume infinite time, though, your argument may return to being automatically true. I’m not sure that’s an assumption that should be confidently made, though. If you don’t assume infinite time, I think it matters again what precise value you put on death vs. incredible suffering, and that may simply be a matter of opinion, of precise differences in two people’s terminal goals.
(Side note: I’ve idly speculated about expanding the above optimization criteria for the case of all-possible-universes—I forget the exact train of thought, but it ended up more or less behaving in a manner such that you optimize the probability-weighted ratio of good outcomes to bad outcomes (summed across time, I guess). Needs more thought to become more rigorous etc.)
Our current understanding of physics (and of our future capabilities) is so limited that I assume our predictions on how the universe will behave trillions of years from now are worthless.
I think we can safely postpone the entire question until we achieve a decent understanding of physics, have become much smarter, and can allow ourselves to invest some thousands of years of deep thought on the topic.
It’s not just an AI safety risk; it’s also an S-risk in its own right.