In a universe where merging consciousnesses is just as routine as splitting them, transhumans may have very different intuitions about what is ethical. For example, I can imagine that starting a brand new consciousness with the intention of gradually dissolving it in another one (a sort of safe landing for the simulated consciousness and its experiences) would be considered perfectly ethical and routine. Perhaps it would even be as routine for them as reasoning about other humans' intentions is for us. (Yes, I know that I don't create a new conscious being when I think about the intentions of another human.)
What I just claimed is that in such a universe, very different ethical norms may emerge. A much stronger claim, which I would not try to defend right now, is that such a nonchalant and inhuman value system may simply be the logical consequence of our own value system when consistently applied to such a weird universe.
I agree with you, but I think part of the problem is that we only get to define ethics once, unless we somehow program the FAI to take the changing volition of the transhuman race into account.
Do you agree with my first, ridiculously modest claim, or my second, quite speculative one? :)
I agreed specifically with the first modest claim and the general sentiment of the entire post.