Not so realistic that you become a different person who never consented to being simulated, nor so realistic that “waking up” afterward equates to killing an innocent person and substituting the old you in their place.
Even where the FAI was sure that the different person would consent to being simulated if made aware of the situation and thinking clearly? It could throw in some pretty good incentives.
I wonder if we should adjust our individual estimates of being in a Friendly-run sim (vs. an unFriendly-run sim or no sim at all) based on whether we think we’d give consent.
I also wonder if we should adjust whether we’d give consent based on how much we’d prefer to be in a Friendly-run sim, and how an FAI would handle that appropriately.
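To make the direction of that update concrete, here is a toy Bayes calculation in Python. It assumes a Friendly AI is more likely to simulate people who would consent; every prior and likelihood in it is made up purely for illustration, not a claim about the actual values.

    # Toy Bayesian update: how much should learning "I would consent"
    # shift my estimate of being in a Friendly-run sim?
    # All priors and likelihoods below are illustrative assumptions.
    prior = {"friendly_sim": 0.10, "unfriendly_sim": 0.10, "no_sim": 0.80}

    # Assumed chance that a person in each kind of world would,
    # on reflection, consent to being simulated.
    p_consent = {"friendly_sim": 0.90, "unfriendly_sim": 0.50, "no_sim": 0.50}

    evidence = sum(prior[h] * p_consent[h] for h in prior)
    posterior = {h: prior[h] * p_consent[h] / evidence for h in prior}
    print(posterior)  # friendly_sim rises from 0.10 to about 0.17

If a Friendly AI really does screen on consent, then noticing that you would consent is (weak) evidence for the Friendly-sim hypothesis; the size of the shift depends entirely on the made-up likelihoods above.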
One reason to adjust the probability of being in a Friendly-run sim significantly downward is what I would call “The Haiti Problem”… I’m curious if anyone has solutions to that problem. Does granting eventual immortality (or the desired heaven!) to all simulated persons make up for a lifetime of suffering?
Perhaps only a small number of persons need be simulated as fully conscious beings, and the rest are acted out well enough to fool us. Perceived suffering of others can add to the verisimilitude of the simulation.
Of course, internalizing this perspective seems like moral poison, because I really do want the root-level version of me to act against the suffering there, where it definitely exists.
I’m not sure I believe your first clause: the final chapter of The Metamorphosis of Prime Intellect tried to propose an almost Buddhist-style resurrection as a solution to the problem of fun. If the universe starts feeling too much like a game to some transhumans, I think a desire to live again as a human for a single lifetime might be somewhat common. Does that desire override the suffering that will be created for the new human consciousness that will later be merged back into the immortal transhuman? Most current humans do seem to value suffering for some reason I don’t understand yet...
Since this is perilously close to an argument about CEV now, we can probably leave that as a rhetorical question. For what it’s worth, I updated my intuitive, qualitative probability of living in a simulation somewhat downward because of your statement that, as you conceive of your Friendly AI right now, it wouldn’t have let me reincarnate myself into my current life.
The masochists that I know seem to value suffering either for interpersonal reasons (as a demonstration of control—beyond that I’m insufficiently informed to speculate), or to establish a baseline against which pleasurable experiences seem more meaningful.
Not so realistic that you become a different person who never consented to being simulated, nor so realistic that “waking up” afterward equates to killing an innocent person and substituting the old you in their place.
In a universe where merging consciousnesses is just as routine as splitting them, transhumans may have very different intuitions about what is ethical. For example, I can imagine that starting a brand new consciousness with the intention of gradually dissolving it in another one (a sort of safe landing for the simulated consciousness and its experiences) would be considered perfectly ethical and routine. Maybe it would even be as routine as it is for us humans to reason about other humans. (Yes, I know that I don’t create a new conscious being when I think about the intentions of another human.)
What I just claimed is that in such a universe, very different ethical norms may emerge. A much stronger claim, which I would not try to defend right now, is that such a nonchalant and inhuman value system may simply be the logical consequence of our value system when consistently applied to such a weird universe.
I agree with you, but I think part of the problem is that we only get to define ethics once, unless we somehow program the FAI to take the changing volition of the transhuman race into account.
Do you agree with my first, ridiculously modest claim, or my second, quite speculative one? :)
I agreed specifically with the first modest claim and the general sentiment of the entire post.