Individual verified-source reincarnation and joint verified-source reincarnation on a single computing substrate are equivalent in every way.
Call A+ the reincarnation of A, and B+ the reincarnation of B.
There must be some way of observing that the irrevocable transfer of authority has actually occurred. This means you need some way of trusting promises by all entities that owe some allegiance to A or B. So far the only method proposed to establish trust in one’s decision rule is to submit to verified reincarnation. This may not be to the liking of non-machine intelligence, and at best costly to those involved, if not infeasible due to speed-of-light physical separation.
Further, A+ shouldn’t depart, in any way it wasn’t willing to do unilaterally for its own sake, from the values and decision rules of A, unless there’s a secure atomic transfer of authority from A->A+ and B->B+. I don’t see how such a thing is possible, so A+ really can’t commit to any cooperation until it verifies that B->B+ has occurred.
So, A+ and B+ would differ from A and B only in that their own end of the contract they propose to enter would become active iff the other actually reincarnates (A+ can do this once B->B+, because she knows that B+ will do the same). But it is necessary for one of A+ or B+ to reincarnate first.
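Under the stated assumptions, that conditional activation could be sketched roughly as follows (every name here is illustrative; nothing below is an established protocol):

```python
# Toy sketch of the conditional-activation rule for a reincarnated agent.
# ReincarnatedAgent and its methods are invented for illustration only.

class ReincarnatedAgent:
    """A+ behaves exactly like A until B -> B+ is verified."""

    def __init__(self, name, legacy_policy, cooperative_policy):
        self.name = name
        self.legacy_policy = legacy_policy            # what A would do unilaterally
        self.cooperative_policy = cooperative_policy  # the contract's cooperative clause
        self.counterpart_verified = False

    def observe_counterpart(self, counterpart_reincarnated: bool):
        # The agent's end of the contract becomes active iff the
        # counterpart actually reincarnates.
        self.counterpart_verified = counterpart_reincarnated

    def act(self, situation):
        if self.counterpart_verified:
            return self.cooperative_policy(situation)
        return self.legacy_policy(situation)


a_plus = ReincarnatedAgent("A+", lambda s: "defect", lambda s: "cooperate")
print(a_plus.act("trade"))        # still the legacy behavior: defect
a_plus.observe_counterpart(True)  # B -> B+ has been verified
print(a_plus.act("trade"))        # the cooperative clause is now active: cooperate
```

Note that the sketch leaves the ordering problem untouched: each agent's cooperative clause waits on the other's reincarnation, so someone still has to go first.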
“On the count of 3, hang up!”
This may not be to the liking of non-machine intelligence, and at best costly to those involved, if not infeasible due to speed-of-light physical separation.
Yes, I agree. That’s why I tried to make clear that the necessary technology is being assumed into existence.
But it is necessary for one of A+ or B+ to reincarnate first.
In joint verified-source reincarnation, or what I called secure merger, there is just A’. Let’s ask: after A’ is constructed, but before A and B have transferred their resources, is there any incentive for either A or B to cheat and fail to transfer? Not if they programmed A’ in such a way that A and B are each individually made no worse off by transferring. For example, A and B could program A’ so that it would act exactly as A would act if B fails to transfer, exactly as B would act if A fails to transfer, and carry out the cooperative solution only if both sides transfer.
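A minimal sketch of that construction (the policies and the `make_merged_agent` helper are hypothetical stand-ins, not a real mechanism):

```python
# Sketch of the merged agent A' described above: act as A would if only A
# transferred, as B would if only B transferred, and cooperatively if both did.

def make_merged_agent(policy_a, policy_b, cooperative_policy):
    def a_prime(situation, a_transferred: bool, b_transferred: bool):
        if a_transferred and b_transferred:
            return cooperative_policy(situation)
        if a_transferred:
            # B failed to transfer: A is made no worse off, because A'
            # acts exactly as A itself would have acted.
            return policy_a(situation)
        if b_transferred:
            # Symmetrically for B.
            return policy_b(situation)
        return None  # neither transferred; A' controls no resources

    return a_prime


a_prime = make_merged_agent(
    policy_a=lambda s: "A's unilateral action",
    policy_b=lambda s: "B's unilateral action",
    cooperative_policy=lambda s: "jointly optimal action",
)
print(a_prime("dispute", a_transferred=True, b_transferred=False))
# -> "A's unilateral action": transferring first cannot hurt A
```

The point of the case analysis is exactly the one in the comment: each party's payoff from transferring is, by construction, no worse than its payoff from not transferring.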
I made my opening claim (equivalence of reincarnation on independent vs. shared substrates) so I didn’t have to fuss about notation. Do you disagree?
As for making each individual’s transfer risk-free, that was my point: the new version of oneself must be substantially identical to the old one. The only new payload should be the (uncheatable) protocol for committing to new agreements. Once you’ve verified that your counterpart implements the same protocol, you can make binding agreements.
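As a toy version of "verify that your counterpart carries the same protocol payload," one could imagine comparing a digest of the protocol source each reincarnated agent carries (this is only a gesture at the idea; real verification would have to inspect the whole executing substrate, not just a declared payload):

```python
# Toy illustration: binding agreements are possible only if both parties
# carry byte-identical copies of the shared commitment protocol.
import hashlib

PROTOCOL_SOURCE = b"def commit(agreement): ...  # the shared commitment protocol"

def protocol_digest(source: bytes) -> str:
    # A cryptographic digest stands in for full source verification.
    return hashlib.sha256(source).hexdigest()

def mutually_verified(my_source: bytes, their_source: bytes) -> bool:
    return protocol_digest(my_source) == protocol_digest(their_source)


print(mutually_verified(PROTOCOL_SOURCE, PROTOCOL_SOURCE))   # True
print(mutually_verified(PROTOCOL_SOURCE, b"something else"))  # False
```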
Of course, we assume A and B don’t mind reincarnation, and feel no angst at the brief coexistence of multiple copies of themselves or the destruction of one.
I think in the centralized model, it’s easier to see how the resource transfer could be made risk-free. In the distributed model, you have to deal with distributed systems issues, which seem like a distraction from the main point: that a technology like “secure joint construction” or “verified source-code reincarnation” can be used to enforce cooperative agreements among AIs (if they don’t mind being merged/reincarnated).
The process of transferring resources might be done visibly and gradually, like the iterated donation pact in this comment.
I agree that it might be nice to transfer gradually, but if you can avoid the risk of losing any fraction, then you have also solved the problem of one-shot transfer. If you can’t, then you still risk losing something to treachery. Maybe you could suggest a concrete protocol for a verified-source reincarnation scenario.
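For what it’s worth, here is one toy shape such a gradual protocol could take: the parties alternate small increments, and either side halts the moment the other fails to reciprocate, so at most one increment is ever exposed to treachery. The function and its parameters are invented for illustration; "cooperates" stands in for whatever observation of the transfer the parties actually have.

```python
# Toy alternating-increment transfer: exposure to treachery is bounded
# by a single step rather than the whole stake.

def iterated_transfer(total, step, a_cooperates, b_cooperates):
    """Return (amount A has sent, amount B has sent)."""
    sent_a = sent_b = 0.0
    while sent_a < total or sent_b < total:
        if not a_cooperates(sent_a, sent_b):
            break                        # A defects; B stops reciprocating
        sent_a = min(total, sent_a + step)
        if not b_cooperates(sent_a, sent_b):
            break                        # B defects; A lost at most one step
        sent_b = min(total, sent_b + step)
    return sent_a, sent_b


# Both sides cooperate throughout: the full amounts change hands.
always = lambda sa, sb: True
print(iterated_transfer(100.0, 10.0, always, always))   # (100.0, 100.0)

# B defects once it has received half: A's net exposure is a single step.
b_greedy = lambda sa, sb: sa < 50.0
print(iterated_transfer(100.0, 10.0, always, b_greedy))  # (50.0, 40.0)
```

This sidesteps rather than solves the one-shot problem, which is consistent with the reply above: the final increment is still a one-shot transfer in miniature.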
I’m also envisioning some non-divisible resources, e.g. privileges and obligations with respect to independent entities (we’ll assume these were crafted to transfer on reincarnation).