This is precisely the sort of thing I don’t want to do, as anthropic probabilities are not agreed upon. For instance:
There is a .25 chance that the coin will land heads and that you will exist, a .25 chance that the coin will land heads and that you will not exist, and a .5 chance that the coin will land tails and you will exist.
Replace “you will exist” with “A will exist”, and rewrite the ending to read “and a .5 chance that the coin will land tails and A will exist. Thus a .25 chance that the coin will land tails, A will exist, and you will be A. (In the heads world, A exists → you are A.)” But is this the right way to reason?
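To make the rewritten decomposition concrete, here is a small enumeration of how it assigns probabilities; this is only my reading of the setup, and the world labels are illustrative assumptions:

```python
# My reading of the rewritten decomposition (illustrative, not authoritative):
# heads: A exists with probability 1/2, and if A exists you are A;
# tails: both A and A' exist, and you are each of them with probability 1/2.
outcomes = {
    ("heads", "A exists", "you are A"): 0.25,
    ("heads", "A absent", "you don't exist"): 0.25,
    ("tails", "A and A' exist", "you are A"): 0.25,
    ("tails", "A and A' exist", "you are A'"): 0.25,
}

assert abs(sum(outcomes.values()) - 1.0) < 1e-12  # the cases partition everything

p_you_are_A = sum(p for case, p in outcomes.items() if case[2] == "you are A")
print(f"P(you are A) = {p_you_are_A}")  # 0.5
```

Whether splitting the tails world evenly between A and A’ is legitimate is exactly the question being raised here.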
It’s because questions like that are so confused that I used this approach.
Ah, so you’re considering A and A’ to be part of the same reference class in SSA.
I can’t even figure out what your approach is. How are you justifying these calculations? (I’ve fixed them in the quote below, I think. At least, if you actually wanted to do what you originally wrote instead, you have even more explaining to do.)
Then if the coupon is priced at £0.60, something quite interesting happens. If the agents do not believe they are linked, they will refuse the offer: their expected returns are 0.5 × (−0.6 + (1 − 0.6)) = −0.1 and −0.1 − 0.05 = −0.15 respectively. If, however, they believe their decisions are linked, they will calculate the expected return from buying the coupon as (1/3) × (−0.60 + 2 × (1 − 0.60)) ≈ 0.067 and 0.067 − 0.05 = 0.017 respectively.
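As a sanity check, the arithmetic in that quote does come out as stated; here is a minimal sketch, with the price, payout, and penalty values read off from the quote and the 0.33 weight interpreted as 1/3:

```python
# Check of the quoted expected returns (all constants taken from the quote).
price, payout, penalty = 0.60, 1.0, 0.05

# Unlinked: a fair coin, counting only this agent's own winnings.
unlinked = 0.5 * (-price) + 0.5 * (payout - price)
# Linked: the 1/3 weight applied to the heads loss plus both copies' gains.
linked = (1 / 3) * (-price + 2 * (payout - price))

print(f"unlinked: {unlinked:.3f}, with penalty: {unlinked - penalty:.3f}")
print(f"linked: {linked:.3f}, with penalty: {linked - penalty:.3f}")
# unlinked: -0.100, with penalty: -0.150
# linked: 0.067, with penalty: 0.017
```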
I could be. The whole mess of reference classes is one of the problems in SSA.
As for the calculations: assume you are linked, so you and the other agent (if there is one) will make the same decision. If you do not have the penalty from trade, “buy the coupon for 0.60” nets you −0.6 in the heads world, nets you personally 1 − 0.4 in the tails world, and nets the other agent 1 − 0.4 in the tails world (you do not care about their pain from trading, since you are selfless, not altruistic). Since you are both selfless and in agreement about the money, the cash adds up in the tails world: 2 × (1 − 0.4). Plugging in the probabilities then gives 0.067.
If you have the penalty from trade, simply subtract it from all your gains (again, the penalty from trade is only yours, and is not shared).
If you assume you are not linked, then you do not claim the other agent’s extra 1 − 0.4 as part of your achievement, so you simply get 0.5 × (−0.6 + (1 − 0.6)) = −0.1, minus the penalty from trade if you have it.
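Putting the last three paragraphs together, here is a sketch of the whole calculation (mine, not the original author’s); following the correction in the next comment, the per-coupon gain in the tails world is taken to be 1 − 0.60:

```python
# Sketch of the linked/unlinked expected returns described above.
# Assumption (per the correction below): "1 - 0.4" is read as 1 - 0.60.
PRICE, PAYOUT, PENALTY = 0.60, 1.0, 0.05

def expected_return(linked: bool, penalised: bool) -> float:
    """Expected return from buying the coupon at PRICE.

    linked    -- you treat the other copy's decision and winnings as your own
    penalised -- this particular agent pays the 0.05 trade penalty
    """
    if linked:
        # Heads loss plus both copies' tails gains, weighted by 1/3
        # as in the quoted calculation.
        value = (1 / 3) * (-PRICE + 2 * (PAYOUT - PRICE))
    else:
        # Fair coin; you claim only your own gain.
        value = 0.5 * (-PRICE) + 0.5 * (PAYOUT - PRICE)
    # The trade penalty is personal: it is never shared with the other copy.
    return value - (PENALTY if penalised else 0.0)

for linked in (False, True):
    for penalised in (False, True):
        print(f"linked={linked}, penalised={penalised}: "
              f"{expected_return(linked, penalised):+.3f}")
```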
Oh, I see. So you are assuming these utility functions:
A: sum of profits for all copies of A or A’, not counting (A’)’s trade penalty.
A’: sum of profits for all copies of A or A’, minus .05 if this particular agent trades.
Now that I know what you meant, I can even tell that your original text implies these utility functions, but it would have helped if you had been more explicit. I had jumped to the conclusion that both agents were selfish when I noticed that A did not take (A’)’s trade penalty into account. Anyway, your original calculation appears to be correct using ADT and those utility functions, so you can disregard my attempted corrections. I’m assuming that when you said 1-0.4 in your reply, you meant 1-0.6.
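For concreteness, the two utility functions might be written as follows; this is a sketch under my reading of the thread, with the profit bookkeeping and names being my own assumptions:

```python
# Sketch of the two utility functions described above (my reconstruction).
PENALTY = 0.05

def utility_A(profits: dict) -> float:
    """A: sum of coupon profits over all copies; the trade penalty of A'
    is deliberately not counted."""
    return sum(profits.values())

def utility_A_prime(profits: dict, a_prime_trades: bool) -> float:
    """A': the same sum, minus 0.05 if this particular agent (A') trades."""
    return sum(profits.values()) - (PENALTY if a_prime_trades else 0.0)

# Tails world, both copies buy the coupon at 0.60 with payout 1:
profits = {"A": 1 - 0.60, "A'": 1 - 0.60}
print(utility_A(profits))                             # 0.8
print(utility_A_prime(profits, a_prime_trades=True))  # 0.75
```

Under these functions the tails-world cash “adds up” for both agents, while only A’ subtracts its own trade penalty, which is what the 0.067 versus 0.017 split reflects.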
Thank you, that is a very useful comment; I will try to clarify in the rewrite.