I’m not sure I fully understand the original argument, but let me try. Maybe it’ll clarify it for me too.
You’re right that I would choose L on the same basis you describe. But that’s not a property of the world; it’s just a guess. It assumes the conclusion: that “I” is randomly distributed among the clones. But what if your personal experience is that you always receive “R”? Would you continue guessing “L” after 100 iterations of receiving “R”? Why? How do you prove that that’s the right strategy? What do you say to the person who has diligently guessed L hundreds of times and lost a lot of money? What do you say to them on your tenth time having this conversation?
Is your argument “maybe you’ll get lucky this time”? But you know that’s not true — one clone won’t be lucky. And this strategy feels very unfair to that person.
You could try to argue for the “greater good”. But then you’re doing the thing where it’s no longer about the “I”; you’re bringing a group in.
Also, am I modelling dadadarren correctly here:

"""
Game 1 Experimenter: “I’ve implemented the reward system in this little machine in front of you. The machine of course does not actually “know” which of L or R you are; I simply built one machine A which pays out 1000 exactly if the ‘I am L’ button is pressed, and then another identical-looking machine B which pays out 999 exactly if the ‘I am not L’ button is pressed, and then I placed the appropriate machine in front of you and the other one in front of your clone you can see over there. So, which button do you press?”
Fissioned dadadarren: “This is exactly like the hypothetical I was discussing online recently; implementing it using those machines hasn’t changed anything. So there is still no correct answer for the objective of maximizing my money; and I guess my plan will be to...”
Experimenter: “Let me interrupt you for a moment; I decided to add one more rule: I’m going to flip this coin, and if it comes up Heads I’m going to swap the machines in front of you and your other clone. [flips the coin] It’s Tails. Ah, I guess nothing changes; you can proceed with your original plan.”
Fissioned dadadarren: “Actually this changes everything—I now just watched that machine in front of me be chosen by true randomness from a set of two machines whose reward structures I know, so I will ignore the anthropic theming of the button labels and just run a standard EV calculation and determine that pressing the ‘I am L’ button is obviously the best choice.”
"""
Is this how it would go—would watching a coin flip that otherwise does not affect the world change the clone’s calculation on what the correct action is or if a correct action even exists? Because while that’s not quite a logical contradiction, it seems bizarre enough to me that I think it probably indicates an important flaw in the theory.
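For concreteness, the “standard EV calculation” in the dialogue would presumably run as follows (my arithmetic; the 1/2 is supplied by the swap coin, not by any self-locating assumption):

EV(press “I am L”) = 1/2 × 1000 + 1/2 × 0 = 500
EV(press “I am not L”) = 1/2 × 0 + 1/2 × 999 = 499.5

Whichever clone I am, the coin alone makes the machine in front of me A or B with probability 1/2, so pressing “I am L” wins by 0.5 in expectation.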
A lot of this appears to apply to completely ordinary (non-self-localizing) probabilities too? e.g. I flip a coin labeled L and R and hide it in a box in front of you, then put a coin with the opposite side face up in a box in front of Bob. You have to guess what face is on your coin, with payouts as in my game 1. Seems like the clear guess is L. But then
"""
what if your personal experience is that you always receive “R”? Would you continue guessing “L” after 100 iterations of receiving “R”? Why? How do you prove that that’s the right strategy? What do you say to the person who has diligently guessed L hundreds of times and lost a lot of money? What do you say to them on your tenth time having this conversation?

Is your argument “maybe you’ll get lucky this time”? But you know that’s not true — one [of you and Bob] won’t be lucky. And this strategy feels very unfair to that person.
"""
and yet this time it’s all classical probability—you know you’re you, you know Bob is Bob, and you know that the coin flips appearing in front of you are truly random and are unrelated to whether you’re you or Bob (other than that each time you get a flip, Bob gets the opposite result). So does your line of thought apply to this scenario too? If yes, does that mean all of normal probability theory is broken too? If no, which part of the reasoning no longer applies?
I will reply here, because it is needed to answer the machine experiment you laid out below.

The difference is that for random/unknown processes, there is no need to explain why I am this particular person. We can just treat it as something given. So classical probabilities can be used without needing any additional assumptions.
For the fission problem, I cannot keep repeating the experiment and expect the relative frequency of me being L or R to converge to any particular value. To get a relative fraction, it has to be calculated across all copies (or one has to come up with something explaining how the first-person perspective comes to be a particular person).
The coin toss problem, on the other hand, I can keep repeating: toss the coin and record the outcome. As long as it is a fair coin, the relative frequency will approach 1/2 for me as the iterations increase. So there is no problem saying the probability is half.

As long as we don’t have to reason about why the first-person perspective is a particular person, everything is rosy. We can even put the fission experiment and the coin toss together: after every fission, I will be presented with a random toss result. As the experiment goes on, I will have seen about equal numbers of Heads and Tails. The probability of Heads is still 1/2 for the post-fission first person.

For coin tosses, I could get a long run of just Heads that throws off my payoff. But that does not mean my original strategy was wrong; it was a freakishly-small-chance event. But if I am LLLLL.....LLLL in a series of fission experiments, I can’t even say that is something with a freakishly small chance. It’s just who I am. What does “it is a small-chance event for me to be LLLLLL..” even mean? Some additional assumption explaining the first-person perspective is required.
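To put numbers on the contrast (my arithmetic, spelling out the comment’s own setup): a run of 100 Heads has a perfectly well-defined, freakishly small probability,

P(100 Heads in a row) = (1/2)^100 ≈ 8 × 10^-31,

whereas after 100 fissions all 2^100 possible L/R records exist, one copy each, so the copy whose record is LLL...L turns up with certainty in every run. There is no chance process left for a “small chance” to refer to.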
That is why at the bottom of my post I used the incubator example to contrast the difference between self-locating probabilities and other, regular probabilities about random/unknown processes.
So to answer your machine example: there is no valid strategy for the first case, as it involves self-locating probability. But for the second case, where the machines are randomly assigned, I would press “I am L”, because the probabilities are equal and it gives 1 dollar more payoff. (My understanding is that even if I am not L, as long as I press that button on that particular machine, it would still give me the 1000 dollars.) This can be checked by repeating the experiment: over a large number of iterations, pressing the “I am L” button gives the reward 1/2 of the time. So does pressing the other button, but that reward is smaller. So if I want to maximize my money, the strategy is clear.
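A minimal sketch of that repetition check (my Python; it assumes, per the dialogue above, that each machine pays on the button press alone, and that the swap coin puts machine A or B in front of me with probability 1/2 each round):

import random

trials = 100_000
total_press_L = 0      # cumulative payout from always pressing 'I am L'
total_press_not_L = 0  # cumulative payout from always pressing 'I am not L'

for _ in range(trials):
    # The swap coin means the machine in front of me is A or B with
    # probability 1/2, regardless of which copy I am.
    machine = random.choice(["A", "B"])
    # Machine A pays 1000 iff 'I am L' is pressed;
    # machine B pays 999 iff 'I am not L' is pressed.
    if machine == "A":
        total_press_L += 1000
    else:
        total_press_not_L += 999

print(total_press_L / trials)      # ~500.0 per round for always 'I am L'
print(total_press_not_L / trials)  # ~499.5 per round for the other button

Both buttons pay out about half the time, but the “I am L” button pays 1 dollar more when it does, matching the strategy described above.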
[case designations added:]

“An incubator could (Case A) create one person each in rooms numbered from 1 to 100. Or an incubator could (Case B) create 100 people then randomly assign them to these rooms. ‘The probability that I am in room number 53’ has no value in the former case, while it has the probability of 1% in the latter case.”
Your two cases seem equivalent to me. To find out where we differ, I’ve created 5 versions of your incubator below. The intent is that Incubator1 implements your Case A; each version is in all relevant ways exactly equivalent to the one below it, and then Incubator5 implements your Case B. Which part of the chain do you disagree with? (A ‘character sheet’ contains whatever raw data you need to make a person, such as perhaps a genome. ‘Roll a character sheet’ means randomly fill in that data in some viable way. Assume we can access a source of true randomness for all rolls/shuffles.) A runnable sketch of the two endpoint incubators follows the chain.
Incubator1:
    For each i<=100:
        Roll a character sheet
        Create that character in room i

Incubator2:
    For each i<=100:
        Roll a character sheet and add it to a list
    For each i<=100:
        Create the i’th listed character in room i

Incubator3:
    For each i<=100:
        Roll a character sheet and add it to a list
    Shuffle the list
    For each i<=100:
        Create the i’th listed character in room i

Incubator4:
    For each i<=100:
        Roll a character sheet, add it to a list, and create that character in the waiting area
    Shuffle the list
    For each i<=100:
        Push the person corresponding to the i’th listed character sheet into room i

Incubator5:
    For each i<=100:
        Roll a character sheet, and create that character in the waiting area
    Write down a list of the people standing in the waiting area and shuffle it
    For each i<=100:
        Push the i’th listed person into room i
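And here is the promised sketch of the two endpoints (my Python; it assumes a character sheet can be reduced to a random 32-bit integer standing in for ‘whatever raw data you need to make a person’):

import random

def incubator1():
    # Case A: roll a sheet and create that character directly in room i.
    return {room: random.getrandbits(32) for room in range(1, 101)}

def incubator5():
    # Case B: create all 100 people first, then shuffle them into rooms.
    people = [random.getrandbits(32) for _ in range(100)]
    random.shuffle(people)
    return dict(zip(range(1, 101), people))

# Long-run check of the Case B claim: tag one person and count how often
# they end up in room 53; the frequency approaches 1/100.
trials, hits = 100_000, 0
for _ in range(trials):
    order = list(range(100))   # person 0 is the tagged person
    random.shuffle(order)
    if order.index(0) == 52:   # rooms are 1-indexed, lists 0-indexed
        hits += 1
print(hits / trials)           # ~0.01

Viewed from the outside, both functions return one independently rolled sheet per room, so the joint distribution of (room, occupant) is identical; whether that identical outside view settles the first-person question is exactly what is at issue below.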
Case A and B are different for the same reason as above: A needs to explain why a first-person perspective is a particular person, while B does not. If you really think about it, Case B is not even an anthropic problem. It is just about a random assignment of rooms. How I am created, and who else is put into the rooms, doesn’t change anything.
If we think in terms of frequencies, Case B can quite easily be repeated. I can get into similar room assignments with 99 others again and again, and the long-run frequency would be about 1% for every room. Case A, however, is anthropic. For starters, repeating it won’t be so simple: a physical person can’t be created multiple times. It could be repeated by procedures similar to the fission experiment (instead of 2 copies, each experiment spawns 100 copies). Then, for the same reason as before, there won’t be a long-run frequency for me.
As for the 5 cases you listed, I would say Cases 1 and 2 are the same as A, while Cases 4 and 5 are the same as B. But Case 3 really depends on your metaphysical position on preexistence. It makes sense for us to say “I naturally know I am this particular person.” But can we push this identification back further, from the particular person to the particular character sheet? I don’t think there is a solid answer to that.
Some considerations: in theory, can 2 people be created from the same character sheet? If so, the identity of preexistence could not be pushed back, and Case 3 is definitely like Case A. That is my reading of the problem. However, if you meant that a character sheet and a physical person have a one-to-one relationship, then saying Case 3 is the same as Case B and assigning probabilities to it wouldn’t cause any problems either, at least not in any way I have foreseen.