This may not be to the liking of non-machine intelligence, and at best costly to those involved, if not infeasible due to speed-of-light physical separation.
Yes, I agree. That’s why I tried to make clear that the necessary technology is being assumed into existence.
But it is necessary for one of A+ or B+ to reincarnate first.
In joint verified-source reincarnation, or what I called secure merger, there is just A’. Let’s ask, after A’ is constructed, but before A and B have transferred their resources, is there any incentive for either A or B to cheat and fail to transfer? Not if they programmed A’ in such a way that A and B are each individually made no worse off by transferring. For example, A and B could program A’ so that it would act exactly as A would act if B fails to transfer, and exactly as B would act if A fails to transfer, and carry out the cooperative solution only if both sides transfer.
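The conditional policy of A’ can be sketched as follows (a minimal illustration; all function names and return values are hypothetical placeholders, not anything specified in the discussion):

```python
# Sketch of the merged agent A' described above (all names hypothetical).
# A' observes which parties actually transferred their resources, then acts
# exactly as A would, exactly as B would, or carries out the cooperative plan.

def act_as_A(state):
    # Placeholder for A's original, unilateral policy.
    return "A's unilateral action"

def act_as_B(state):
    # Placeholder for B's original, unilateral policy.
    return "B's unilateral action"

def cooperative_action(state):
    # Placeholder for the jointly agreed cooperative plan.
    return "cooperative action"

def merged_agent_act(a_transferred, b_transferred, state=None):
    """Policy of A': neither A nor B is made worse off by transferring."""
    if a_transferred and b_transferred:
        # Both sides transferred: execute the cooperative solution.
        return cooperative_action(state)
    if a_transferred:
        # B failed to transfer: A' acts exactly as A would, so A loses nothing.
        return act_as_A(state)
    if b_transferred:
        # A failed to transfer: A' acts exactly as B would, so B loses nothing.
        return act_as_B(state)
    # Neither transferred: A' controls no resources and does nothing.
    return None
```

Under this policy, transferring is risk-free for each party regardless of what the other does, which is exactly the incentive argument made above.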
I made my opening claim (equivalence of reincarnation on independent vs. shared substrates) so I didn’t have to fuss about notation. Do you disagree?
As for making each individual’s transfer risk-free, that was my point—the new version of oneself must be substantially identical to the old one. The only new payload should be the (uncheatable) protocol for committing to new agreements. You’ve verified that your counterpart implements the same protocol, and you can make binding agreements.
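The counterpart-verification step could look something like this (a toy sketch, assuming the commitment protocol is represented as source text that both sides can inspect; the hash comparison and all names are illustrative assumptions):

```python
import hashlib

# Placeholder for the agreed-upon commitment protocol's source text.
AGREED_PROTOCOL_SOURCE = "def commit(agreement): ..."

def source_hash(source_code: str) -> str:
    # Digest of a source text, so both sides can compare protocols cheaply.
    return hashlib.sha256(source_code.encode("utf-8")).hexdigest()

def willing_to_commit(counterpart_source: str) -> bool:
    # Enter a binding agreement only if the counterpart's verified source
    # implements byte-identical protocol code.
    return source_hash(counterpart_source) == source_hash(AGREED_PROTOCOL_SOURCE)
```

A byte-identity check is of course far stronger than needed; verifying behavioral equivalence of the protocol would suffice, but is much harder in general.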
Of course, we assume A and B don’t mind reincarnation, and feel no angst at the brief coexistence of multiple copies of themselves, or the destruction of one.
I think in the centralized model, it’s easier to see how the resource transfer could be made risk-free. In the distributed model, you have to deal with distributed-systems issues, which seem like a distraction from the main point: that a technology like “secure joint construction” or “verified source-code reincarnation” can be used to enforce cooperative agreements among AIs (if they don’t mind being merged/reincarnated).