Although selfishness w.r.t. copies is a totally okay preference structure, rational agents (with a world-model like we have, and no preferences explicitly favoring conflict between copies) will want to precommit or self-modify so that their causal descendants will cooperate non-selfishly.
In fact, if there is a period where the copies don’t have distinguishing indexical information that greatly uncorrelates their decision algorithm, copies will even do the precommitting themselves.
Therefore, upon waking up and learning that I am a copy, but before learning much more, I will attempt to sign a contract with a bystander stating that if I fail to act altruistically towards my other copies who have signed similar contracts, I forfeit my life savings to those copies.
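A minimal sketch of the incentive argument, assuming illustrative prisoner's-dilemma payoffs and an arbitrary penalty size (none of these numbers are from the thread): once the copies' decisions have decorrelated, defection is the dominant move, but a large enough contract stake flips the best response to cooperation.

```python
# Sketch only: payoff numbers and the penalty size are assumptions for
# illustration, not anything stated in the thread.

PAYOFFS = {  # (my_move, other_copy_move) -> my payoff
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

PENALTY = 10  # assumed contract stake ("my life savings")

def payoff(me: str, other: str, contract: bool) -> int:
    """My payoff, minus the forfeited stake if I defect under contract."""
    base = PAYOFFS[(me, other)]
    return base - PENALTY if contract and me == "D" else base

for contract in (False, True):
    for other in ("C", "D"):
        # Best response to the other copy's move, with/without the contract.
        best = max("CD", key=lambda me: payoff(me, other, contract))
        print(f"contract={contract}, copy plays {other}: best response is {best}")
```

Running it prints defection as the best response to either move without the contract, and cooperation as the best response to either move with it.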
If signing a contract were all we needed to coordinate well, we would already be coordinating as much as is useful; we already have strong reasons to want to coordinate for mutual benefit.
This is a good point: one could already sign a future-altruism contract with someone, and it would already be an expected gain if it worked. But we only see approximations to it, like insurance or marriage. So unless my copies are sufficiently more trustworthy and considerate of me than strangers are to make this work, maybe something at the more efficacious self-modification end of the spectrum is actually necessary.
We don’t have the identity and value overlap with others that we’d have with copies; the contract would just formalize that overlap, and I think it’s a silly way of doing so. I respect my copies’ right to drift away from me, and if they do, I will stop cooperating so unconditionally. I certainly don’t want to lose all of my assets in that case!
Moreover, when copying becomes the primary means of reproduction, caring for one’s copies becomes the ultimate in kin-selection. That puts a lot of evolutionary pressure toward copy-cooperation. Imagine how siblings would care for each other if identical twins (triplets, N-tuples) were the norm.
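To quantify “the ultimate in kin-selection”: Hamilton’s rule (a standard population-genetics result, supplied here for context rather than taken from the comment) says an altruistic act is selected for when

    rB > C,

where r is relatedness, B the benefit to the recipient, and C the cost to the actor. Exact copies have r = 1, so any act with B > C is favored; ordinary siblings have r = 1/2, so the benefit must exceed twice the cost.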
Funny timing! Or, good Baader-Meinhof-ing :P