re “No Long-Run Average Frequency” and “Not Useful to Decision Making”: You say that there is no way to assign a probability to “I am L”, and consequently no “valid strategy” for problems that rely on that information. Consider the following two games:
Game 1: You have been fissioned once. You may say ‘I am L’ and get paid 1000$ if correct, or ‘I am not L’ and get paid 999$ if correct.
Game 2: You have been fissioned twice (with names LL, LR, RL, RR). You may say ‘I am LR’ and get paid 1000$ if correct, or ‘I am not LR’ and get paid 999$ if correct.
What move would you personally actually make in each of these games, and why?
This is what I’d do:
I’d pick ‘I am L’ in the first game and ‘I am not LR’ in the second
I’d justify that by writing down the inequalities “0.5 * 1000 > 0.5 * 999” and “0.25 * 1000 < 0.75 * 999”
I’d use the word “probabilities” to refer to those numbers above that have decimal points
If you disagree on the first or second point (i.e. you would make different moves in the games, or you would justify your moves using different math), I’d love to hear your alternatives. If you disagree only on the third point, then it seems like a disagreement purely over definitions; you are welcome to call those numbers bleggs or something instead if you prefer, but once the games get more complicated and the math gets harder and you need help manipulating your bleggs, I think you’ll find perfectly usable advice in a probability textbook.
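(For concreteness, here is that arithmetic written out as a tiny Python sketch; it is purely illustrative, and the 0.5/0.25/0.75 weights are exactly the contested self-locating numbers from the inequalities above.)

game1_say_L     = 0.5 * 1000    # 500.0
game1_say_not_L = 0.5 * 999     # 499.5
game2_say_LR     = 0.25 * 1000  # 250.0
game2_say_not_LR = 0.75 * 999   # 749.25
print(game1_say_L > game1_say_not_L)      # True  -> say "I am L" in game 1
print(game2_say_LR > game2_say_not_LR)    # False -> say "I am not LR" in game 2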
Essentially what Gunnar_Zarncke said.
Assuming the objective is to maximize my money, there is no good strategy. You can make the decision as you described, but how do you justify it being the correct decision? I either get the money or I don’t, depending on whether I am L or not. But there is no explanation as to why. The decimal numbers never appear for just me.
The value calculated is meaningful if applied to all copies. The decimal numbers are the relative fractions. It is correct to say that if every copy makes decisions this way, then the copies will have more money combined. But there is no first person in this. Why would this decision also be the best for me specifically? There is no reason, unless we make an additional assumption such as “I am a random sample from these copies.”
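(To make the “all copies combined” reading concrete, here is a minimal, purely illustrative Python sketch of the total money across copies when every copy follows the same fixed policy; the function and variable names are made up for illustration.)

def total_money(copies, target, pay_yes, pay_no, say_yes):
    # Every copy either says "I am <target>" (say_yes=True) or "I am not <target>".
    total = 0
    for c in copies:
        if say_yes:
            total += pay_yes if c == target else 0
        else:
            total += pay_no if c != target else 0
    return total

print(total_money(["L", "R"], "L", 1000, 999, True))                   # 1000
print(total_money(["L", "R"], "L", 1000, 999, False))                  # 999
print(total_money(["LL", "LR", "RL", "RR"], "LR", 1000, 999, True))    # 1000
print(total_money(["LL", "LR", "RL", "RR"], "LR", 1000, 999, False))   # 2997

(These totals are just 2 and 4 times the per-copy expected values, which is why the combined-money ordering matches the inequalities above.)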
Ultimately though, there is some answer to my question “What move would you personally actually make in each of these games, and why?”: whether or not there is a “correct” move or a “mathematically justified” move, etc, there is some move you personally would make. What is it, and why? If you personally would make a different move from me, then I want to know what it is! And if you would make the same move as me and write down the same math as me as your reason, then the only remaining disagreement is that I call that move “correct” while you call it “the move I would make because it somehow makes sense even though it’s not fundamentally correct”, and at that point it’s just definitions arguments.
I would say there is no “why” to which person I am, so there is no way to say which action is right or wrong. I could very well choose to guess “I am not L”, and it would be as good or as bad a guess as yours. There is no math to write down at all.
If you say guessing “I am L” is the correct action while guessing “I am not L” is wrong, then you would need to come up with a reason for it. “I choose to guess I’m not L and you say I am wrong to do so. Then tell me why.” There isn’t any justification. Considering all the copies does not work unless you assume the first person is a random sample.
It sounds like you are misinterpreting my question, since the “why” in it is not “why are you person L or not person L”, it’s “why in the game would you speak the words ‘I am L’ or ‘I am not L’”. Let me try one more time to make the question extremely clear: if you actually played my games, some thoughts (call these X) would actually go through your head, and then some words (call these Y) would actually come out of your mouth. What is Y, and what is X? Whether or not the “correct” move is undefined (I still don’t care to argue definitions), you can’t seriously expect me to believe that X and Y are undefined—I assume you know yourself well enough to know what you personally would actually do. So what are X and Y?
Example answers:
Y=‘I am L’ in game 1 and ‘I am LR’ in game 2. X=”Hmm, well there’s no law governing which answer is right, so I might as well say the thing that might get me the bigger number of dollars.”
Y=‘I am not L’ in game 1 and ‘I am not LR’ in game 2. X=”No known branch of math has any relevance here, so when faced with this game (or any similar stupid game with no right answer) I’ll fall back on picking whatever option was stated most recently in the question, since that’s the one I remember hearing better.”
Provided the objective is to maximize my money, there is no way to reason about it. So either of your example answers is fine; neither is more or less valid than any other answer.
Personally, I would just always guess a positive answer and forget about it, as that saves more energy. So “I am L” and “I am LR” for your problems. If you think that is wrong, I would like to know why.
Your answer based on expected value could maximize the total money of all copies (assuming everyone has the same objective and makes the same decision). Maximizing the benefit of people similar to me (copies) at the expense of people different from me (the bet offerer) is an alternative objective. People might choose it due to natural feelings; after all, it is a beneficial evolutionary trait. That is why this alternative objective seems attractive, especially when there is no valid strategy to maximize my benefit specifically. But as I have said, it does not involve self-locating probability.
You make a good point about the danger of alternate objectives creeping in if the original objective is unsatisfiable; this helps me see why my original thought experiment is not as useful as I’d hoped. What are your thoughts on this one? https://www.lesswrong.com/posts/heSbtt29bv5KRoyZa/the-first-person-perspective-is-not-a-random-sample?commentId=75ie9LnZgBEa66Kp8
No problem. I am actually very happy we can get some agreement, which does not happen very often in discussions of anthropics.
By the way I apologize for not directly addressing your points. The reason I’m not talking about indexicals or anything directly is that I think I can demonstrate a probable flaw in your argument while treating it entirely as a black box. The way that works: as an extreme example, imagine that I successfully demonstrated that a) every person on this site including you would, when actually dropped into these games, make the same moves as me, and that b) in order to come up with that answer, every person on this site including you goes through a mental process that looks a whole lot like “0.5 * 1000 > 0.5 * 999” and “0.25 * 1000 < 0.75 * 999” even if they don’t have any words justifying why they are doing it that way. If that were the case, then I’d posit the following: 1) there is, with extremely high probability, some very reasonable sense in which those moves are “correct”/“mathematically justified”/“good strategies” etc, even if, for any specific one of those terms, we’re not comfortable labeling the moves as such, and therefore 2) with extremely high probability, any theory like yours which suggests that there is no correct move is at least one of a) using a new set of definitions but with no practical difference to actions (e.g. it will refuse to call the moves “correct”, but then will be forced to come up with some new term like “anthropically correct” to explain why they “anthropically recommend” that you make those moves even though they’re certainly not actually “correct”, in which case I don’t care about the wording), and/or b) flawed somewhere, even if I cannot yet figure out which sentence in the argument is wrong.
My reading of dadadarren is that you can use that method to make a decision, but you cannot use that to determine whether it is correct. How would you (the I in that situation) determine that? One can’t. Either it gets 1000 or 999, and it learns whether it is an L or an R but not with which probability. The formula gives an expected value over a lot of such interactions. Which ones count? If only those of the I count, then it will never be any wiser even if it loses or wins all the time—it could just be the lucky ones. Only by comparing to a group of other first persons can you evaluate that—but, as dadadarren says, then it is no longer about the I.
I’m not sure I fully understand the original argument, but let me try. Maybe it’ll clarify it for me too.
You’re right that I would choose L on the same basis you describe. But that’s not a property of the world, it’s just a guess. It’s assuming the conclusion — the assumption that “I” is randomly distributed among the clones. But what if your personal experience is that you always receive “R”? Would you continue guessing “L” after 100 iterations of receiving “R”? Why? How do you prove that that’s the right strategy? What do you say to the person who has diligently guessed L hundreds of times and lost a lot of money? What do you say to them on your tenth time having this conversation?
Is your argument “maybe you’ll get lucky this time”? But you know that’s not true — one clone won’t be lucky. And this strategy feels very unfair to that person.
You could try to argue for “greater good”. But then you’re doing the thing where it’s no longer about the “I”. You’re bringing a group in.
Also am I modelling dadadarren correctly here:
Game 1 Experimenter: “I’ve implemented the reward system in this little machine in front of you. The machine of course does not actually “know” which of L or R you are; I simply built one machine A which pays out 1000 exactly if the ‘I am L’ button is pressed, and then another identical-looking machine B which pays out 999 exactly if the ‘I am not L’ button is pressed, and then I placed the appropriate machine in front of you and the other one in front of your clone you can see over there. So, which button do you press?”
Fissioned dadadarren: “This is exactly like the hypothetical I was discussing online recently; implementing it using those machines hasn’t changed anything. So there is still no correct answer for the objective of maximizing my money; and I guess my plan will be to...”
Experimenter: “Let me interrupt you for a moment, I decided to add one more rule: I’m going to flip this coin, and if it comes up Heads I’m going to swap the machines in front of you and your other clone. flip; it’s Tails. Ah, I guess nothing changes; you can proceed with your original plan.”
Fissioned dadadarren: “Actually this changes everything—I now just watched that machine in front of me be chosen by true randomness from a set of two machines whose reward structures I know, so I will ignore the anthropic theming of the button labels and just run a standard EV calculation and determine that pressing the ‘I am L’ button is obviously the best choice.”
Is this how it would go—would watching a coin flip that otherwise does not affect the world change the clone’s calculation on what the correct action is or if a correct action even exists? Because while that’s not quite a logical contradiction, it seems bizarre enough to me that I think it probably indicates an important flaw in the theory.
A lot of this appears to apply to completely ordinary (non-self-localizing) probabilities too? e.g. I flip a coin labeled L and R and hide it in a box in front of you, then put a coin with the opposite side face up in a box in front of Bob. You have to guess what face is on your coin, with payouts as in my game 1. Seems like the clear guess is L. But then
what if your personal experience is that you always receive “R”? Would you continue guessing “L” after 100 iterations of receiving “R”? Why? How do you prove that that’s the right strategy? What do you say to the person who has diligently guessed L hundreds of times and lost a lot of money? What do you say to them on your tenth time having this conversation?
Is your argument “maybe you’ll get lucky this time”? But you know that’s not true — one [of you and Bob] won’t be lucky. And this strategy feels very unfair to that person.
and yet this time it’s all classical probability—you know you’re you, you know Bob is Bob, and you know that the coin flips appearing in front of you are truly random and are unrelated to whether you’re you or Bob (other than that each time you get a flip, Bob gets the opposite result). So does your line of thought apply to this scenario too? If yes, does that mean all of normal probability theory is broken too? If no, which part of the reasoning no longer applies?
I will reply here, because it is needed to answer the machine experiment you laid out below.
The difference is that for random/unknown processes, there is no need to explain why I am this particular person. We can just treat it as something given. So classical probabilities can be used without needing any additional assumptions.
For the fission problem, I cannot keep repeating the experiment and expect the relative frequency of me being L or R to converge on any particular value. To get the relative fraction, it has to be calculated from all copies (or we have to come up with something explaining how the first-person perspective comes to be a particular person).
For the coin toss problem, on the other hand, I can keep repeating the toss and recording the outcome. As long as it is a fair coin, the relative fraction would approach 1⁄2 for me as the iterations increase. So there is no problem saying the probability is half.
As long as we don’t have to reason about why the first-person perspective is a particular person, everything is rosy. We can even put the fission experiment and the coin toss together: after every fission, I will be presented with a random toss result. As the experiment goes on, I would see about equal numbers of Heads and Tails. The probability of Heads is still 1⁄2 for the post-fission first person.
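(A minimal simulation sketch, in Python and purely illustrative, of that relative-frequency claim; the fission itself is not modelled here, only the sequence of tosses I would record.)

import random

tosses = [random.choice("HT") for _ in range(100_000)]   # one fair toss per fission
print(tosses.count("H") / len(tosses))                   # ~0.5 as the count grows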
For coin tosses, I could get a long run of just Heads and that would throw off my payoff. But that does not mean my original strategy is wrong; it is a freakishly small-chance event. But if I am LLLLL.....LLLL in a series of fission experiments, I can’t even say that is something with a freakishly small chance. It’s just who I am. What does “it is a small-chance event for me to be LLLLLL..” even mean? Some additional assumption explaining the first-person perspective is required.
That is why at the bottom of my post I used the incubator example to contrast the difference between self-locating probabilities and other, regular probabilities about random/unknown processes.
So to answer your machine example: there is no valid strategy for the first case, as it involves self-locating probability. But for the second case, where the machines are randomly assigned, I would press “I am L”, because the probabilities are equal and that button gives 1 dollar more payoff. (My understanding is that even if I am not L, as long as I press that button on that particular machine, it would still give me the 1000 dollars.) This can be checked by repeating the experiment. If a large number of iterations is performed, pressing the “I am L” button will give a reward 1⁄2 of the time. So would pressing the other button, but its reward is smaller. So if I want to maximize my money, the strategy is clear.
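(Here is that check as a small, purely illustrative Python sketch: a fair coin decides which machine sits in front of me, and I compare the long-run average payoff of always pressing “I am L” against always pressing “I am not L”.)

import random

N = 100_000
pay_L = pay_not_L = 0
for _ in range(N):
    # Machine A pays 1000 on the "I am L" button; machine B pays 999 on the
    # "I am not L" button. A fair coin decides which machine I get.
    my_machine = random.choice("AB")
    pay_L     += 1000 if my_machine == "A" else 0   # policy: always press "I am L"
    pay_not_L += 999  if my_machine == "B" else 0   # policy: always press "I am not L"

print(pay_L / N, pay_not_L / N)   # ~500.0 vs ~499.5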
[case designations added:]
An incubator could (Case A) create one person each in rooms numbered from 1 to 100. Or an incubator could (Case B) create 100 people then randomly assign them to these rooms. “The probability that I am in room number 53” has no value in the former case. While it has the probability of 1% for the latter case.
Your two cases seem equivalent to me. To find out where we differ, I’ve created 5 versions of your incubator below. The intent is that Incubator1 implements your Case A; each version is in all relevant ways exactly equivalent to the one below it, and then Incubator5 implements your Case B. Which part of the chain do you disagree with? (A ‘character sheet’ contains whatever raw data you need to make a person, such as perhaps a genome. ‘Roll a character sheet’ means randomly fill in that data in some viable way. Assume we can access a source of true randomness for all rolls/shuffles.)
Incubator1:
For each i<=100:
Roll a character sheet
Create that character in room i
Incubator2:
For each i<=100:
Roll a character sheet and add it to a list
For each i<=100:
Create the i’th listed character in room i
Incubator3:
For each i<=100:
Roll a character sheet and add it to a list
Shuffle the list
For each i<=100:
Create the i’th listed character in room i
Incubator4:
For each i<=100:
Roll a character sheet, add it to a list, and create that character in the waiting area
Shuffle the list
For each i<=100:
Push the person corresponding to the i’th listed character sheet into room i
Incubator5:
For each i<=100:
Roll a character sheet, and create that character in the waiting area
Write down a list of the people standing in the waiting area and shuffle it
For each i<=100:
Push the i’th listed person into room i
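(From the outside, third-person view the shuffled versions are easy to simulate; below is a minimal, purely illustrative Python sketch of Incubator3 and Incubator5 in which “character sheets” are just numeric ids and the function names are made up. It only shows that both procedures give any fixed sheet a ~1% long-run frequency of landing in room 53 from the outside; whether that number is a probability for me is exactly what is disputed in the reply that follows.)

import random

def incubator3(n=100):
    # Roll n character sheets (ids), shuffle the list, create the i'th listed
    # character in room i. Returns {sheet_id: room}.
    order = list(range(n))
    random.shuffle(order)
    return {sheet: room for room, sheet in enumerate(order, start=1)}

def incubator5(n=100):
    # Create everyone in the waiting area first, then shuffle the people and
    # push the i'th listed person into room i. Person j was made from sheet j.
    people = list(range(n))
    random.shuffle(people)
    return {person: room for room, person in enumerate(people, start=1)}

runs = 20_000
for inc in (incubator3, incubator5):
    hits = sum(1 for _ in range(runs) if inc()[0] == 53)
    print(inc.__name__, hits / runs)   # both ~0.01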
Case A and B are different for the same reason as above. A needs to explain why a first-person perspective is a particular person, while B does not. If you really think about it, Case B is not even an anthropic problem. It is just about a random assignment of rooms. How I am created and who else is put into the rooms don’t change anything.
If we think in terms of frequencies, Case B can quite easily be repeated. I can get into similar room assignments with 99 others again and again. The long-run frequency would be about 1% for every room. Case A, however, is anthropic. For starters, repeating it won’t be so simple. A physical person can’t be created multiple times. It can be repeated by procedures similar to the fission experiment (instead of 2 copies, each experiment spawns 100 copies). Then, for the same reason, there won’t be a long-run frequency for me.
As for the 5 cases you listed, I would say Cases 1 and 2 are the same as A, while Cases 4 and 5 are the same as B. But for Case 3 it really depends on your metaphysical position on preexistence. It makes sense for us to say “I naturally know I am this particular person.” But can we push this identification back further, from the particular person to the particular character sheet? I don’t think there is a solid answer to that.
Some considerations could include: in theory, can 2 people be created from the same character sheet? If so, the identity of preexistence could not be pushed back, and Case 3 is definitely like Case A. That is my reading of the problem. However, if you meant that a character sheet and a physical person have a one-to-one mirroring relationship, then saying it is the same as Case B and assigning probabilities to it wouldn’t cause any problems either. At least not in any way I have foreseen.