Imagine that there is one copy now, that there will be n extra copies made in ten minutes, which will all be deleted in twenty minutes. I am confronted with situations such as “do you want to make this advantageous deal now, or a slightly less/more advantageous deal in 10⁄20 minutes?” By “all copies make the same purely indexical decisions” I would want to delay if, and only if, that is what I would want to do if there were no extra copies made at all. This is only possible if my personal indexical utility is the same throughout the creation and destruction of the other copies. Since no copy is special, all my copies must have the same personal indexical utility, irrespective of the number of copies. So their “shared indexical” utility must be the sum of this.
If I understand this correctly, what you mean is that in a situation where I am given a choice between:
A) 1 bar of chocolate now,
B) 2 bars in ten minutes,
C) 3 bars in twenty minutes,
if 10 copies of me are made, but they are not “in on the deal” with me (they get no chocolate, no matter what I pick), then instead of giving B 2 utility I should give it about 0.18 utility (2/11) and prefer A to B. You are right that this seems absurd, and that summing utility instead of averaging it fixes this problem.
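To make the arithmetic explicit, here is a minimal sketch of that calculation (assuming, as a simplification of my own, one unit of utility per bar of chocolate):

```python
# Option B with 10 extra copies who are not in on the deal:
# I get 2 bars in ten minutes, the copies get nothing.
my_bars = 2
copy_bars = [0] * 10
outcomes = [my_bars] + copy_bars

average_utility = sum(outcomes) / len(outcomes)  # 2 / 11, about 0.18
summed_utility = sum(outcomes)                   # 2, the same as with no copies at all

print(average_utility, summed_utility)
```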
However, in situations where the copies are “in on the deal” and do receive chocolate, the results also seem absurd. Imagine the same situation, except that if I pick B, each copy will also get 2 bars of chocolate.
If the utilities of the copies are summed, then picking B will result in 22 utility (2 bars each for me and the 10 copies), while picking C will result in 3. This would mean I would select B if 10 copies are made and C if no copies are made. It would also mean that I should be willing to pay 18 chocolate bars for the privilege of having 10 identical copies made who each eat a chocolate bar and are then deleted.
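Again, just making the sums explicit (same assumed one-utility-per-bar simplification):

```python
# Option B when the copies are in on the deal: each of the 11 of us gets 2 bars.
b_summed = 11 * 2   # 22
# Option C: the copies are deleted before the twenty-minute mark, so only I get bars.
c_summed = 3

# With no copies, C (3) beats B (2); once the 10 copies exist,
# the summed view flips the preference, since 22 > 3.
print(b_summed > c_summed)  # True
```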
This seems absurd to me, however. If given a choice between two chocolate bars, or one chocolate bar plus having a million copies created who each eat one chocolate bar and are then merged with me, I’ll pick the two chocolate bars. It seems to me that any decision theory which claims you should be willing to pay to create exact duplicates of yourself who exist briefly, have the exact same experiences as you, and are then merged back with you, should be rejected.
There is no amount of money I would be willing to pay to create more copies who will have exactly the same experiences as me, provided the alternative is that no copies will be made at all if I don’t pay. (I would be willing to pay to have an identical copy made if the alternative is that a copy who is tortured gets made if I don’t pay, or something like that.)
Obviously I’m missing something.
Here’s one possible thing that I might be missing: does this decision theory have anything to say about how many copies we should choose to make, if we have a choice, or does it only apply to situations where a copy is going to be made whether we like it or not? If it is the latter, then it might make sense to prefer B to C when copies are definitely going to be created, but to take action to make sure they are not created, so that you are free to choose C.
On this view, having a copy made changes my utility function in a subtle way: among other things, it doubles the strength of all my current preferences. So I should avoid having large numbers of copies made for the same reason Gandhi should avoid murder pills. This makes sense to me; I want to have backup copies of myself and other such things, but I am leery of having a trillion copies à la Robin Hanson.
Other solutions might include modifying the average view in some fashion: for instance, using summed utilities for decisions affecting just you, and averaged ones for decisions affecting yourself and your copies. Or taking a timeless average view and dividing utility by the number of copies you will ever have, regardless of whether they exist at the moment or not. (This could potentially lead to creating suffering copies whenever copies that are suffering even more already exist, but we can patch that by evaluating disutility and utility asymmetrically, so that the first is summed and the second is averaged.)
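For what it’s worth, here is a rough sketch of these candidate aggregation rules; the function names, and the choice to split positive from negative terms for the asymmetric patch, are my own guesses at how to formalize the idea rather than anything settled:

```python
def summed_view(outcomes):
    """Add up the utility of every copy."""
    return sum(outcomes)

def average_view(outcomes):
    """Average utility over every copy involved in the decision."""
    return sum(outcomes) / len(outcomes)

def timeless_average_view(outcomes, copies_ever):
    """Divide total utility by the number of copies you will ever have,
    whether or not they exist at the moment of the decision."""
    return sum(outcomes) / copies_ever

def asymmetric_view(outcomes):
    """The patch in the parenthetical above: disutility (negative terms)
    is summed, while utility (positive terms) is averaged."""
    negatives = [u for u in outcomes if u < 0]
    positives = [u for u in outcomes if u >= 0]
    averaged_positives = sum(positives) / len(positives) if positives else 0
    return sum(negatives) + averaged_positives
```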