This sounds like the standard argument around negative utility.
If you weight negative utility highly enough, you could also conclude that the moral thing to do is to set to work on a virus to kill all humans as fast as possible.
You don’t even need mind-uploading. If you weight suffering highly enough, you could decide that the right thing to do is to take a trip to a refugee camp full of people who, on average, are likely to have hard, painful lives, and leave a sarin gas bomb.
Put another way: if you encountered an infant with epidermolysis bullosa would you try to kill them, even against their wishes?
Negative utility needs a non-zero weight. I assert that it is possible to disagree with your scenarios (refugees, infant) and still be trapped by the OP, if negative utility is weighted at a low but non-zero level, such that avoiding the suffering of a human lifespan is never adequate to justify suicide. After all, everyone dies eventually; there is no need to speed up the process when there is hope for improvement.
In this context, can death be viewed as a human right? Removing the certainty of death means that any non-zero weight on negative utility can result in an arbitrarily large aggregate negative utility over the (potentially unlimited) lifetime of an individual confined in a hell simulation.
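To make the arithmetic behind that claim explicit (a minimal sketch; the symbols $w$, $s$, and $T$ are illustrative, not anything from the original discussion): let suffering accrue at some instantaneous rate $s(t) \ge s_{\min} > 0$ and let negative utility carry weight $w > 0$. The aggregate negative utility over a lifespan of length $T$ is then

$$U^{-}(T) = \int_0^T w\, s(t)\, dt \;\ge\; w\, s_{\min}\, T,$$

which stays bounded when $T$ is capped by a mortal lifespan but grows without bound as $T \to \infty$. Any non-zero $w$ therefore suffices for the hell-simulation case, even a weight too small to justify suicide over a merely human span of suffering.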
The quickest way to make me start viewing a sci-fi *topia as a dystopia is to have suicide banned in a world of (potential) immortals. To me, the “right to death” is essential once immortality is possible.
Still, I get the impression that saying they’ll die at some point anyway is a bit of a dodge of the challenge.
After all, nothing is truly infinite. Eventually entropy will necessitate an end to any simulated hell.
A suicide ban in a world of immortals is an extreme case of a policy of force-feeding hunger-striking prisoners. The latter is normal in the modern United States, so if the Age of Em begins in the United States, it is safe to assume that secure deletion of an Em would likely be difficult, and that abetting it, especially for prisoners, may be illegal.
I assert that the addition of potential immortality, and the abandonment of human-scale timespans for brains built to care about human timescales, creates a special case. Furthermore, a living human has, by virtue of the frailty of the human body, limits on the amount of suffering it can endure. An Em does not, so preventing an Em, or potential Em, from being trapped in a torture-sim and tossed into the event horizon of a black hole to wait out the heat death of the universe is preventing a different class of harm from the privations humans endure today.