Example answers:
Y=‘I am L’ in game 1 and ‘I am LR’ in game 2. X=“Hmm, well there’s no law governing which answer is right, so I might as well say the thing that might get me the bigger number of dollars.”
Y=‘I am not L’ in game 1 and ‘I am not LR’ in game 2. X=“No known branch of math has any relevance here, so when faced with this game (or any similar stupid game with no right answer) I’ll fall back on picking whatever option was stated most recently in the question, since that’s the one I remember hearing better.”
Provided the objective is to maximize my money, there is no way to reason about it. So either of your example answers is fine; neither is more or less valid than any other answer.
Personally, I would just always guess the positive answer and forget about it, since that saves the most energy. So “I am L” in game 1 and “I am LR” in game 2. If you think that is wrong, I would like to know why.
Your answer based on expected value could maximize the total money of all copies (assuming everyone has the same objective and makes the same decision). Maximizing the benefit of people similar to me (the copies) at the expense of people different from me (the bet offerer) is an alternative objective. People might choose it out of natural feeling; after all, it is a beneficial evolutionary trait. That is why this alternative objective seems attractive, especially when there is no valid strategy for maximizing my own benefit specifically. But, as I have said, it does not involve self-locating probability.
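The "total money of all copies" point can be made concrete with a minimal sketch. The game details below are assumptions for illustration only (the actual games 1 and 2 are not restated here): suppose two rounds of duplication produce four copies labelled LL, LR, RL, RR, each copy is paid $1 for a correct guess about whether it is the LR copy, and every copy follows the same policy. The script just tallies the combined winnings under each uniform policy.

```python
# Minimal sketch of the aggregate-expected-value point above.
# Assumed for illustration only: four copies (LL, LR, RL, RR), a $1 payoff
# per correct guess about being the LR copy, and a single shared policy.

COPIES = ["LL", "LR", "RL", "RR"]  # hypothetical labels after two duplications


def total_winnings(guess_is_lr: bool) -> int:
    """Total dollars won by all copies if every copy makes the same guess."""
    total = 0
    for label in COPIES:
        is_lr = (label == "LR")
        if guess_is_lr == is_lr:  # this copy guessed correctly
            total += 1
    return total


if __name__ == "__main__":
    print("All copies guess 'I am LR':    $", total_winnings(True))
    print("All copies guess 'I am not LR': $", total_winnings(False))
    # Under these assumed payoffs, the uniform 'I am not LR' policy wins $3
    # versus $1, matching the aggregate-expected-value reasoning; whether any
    # single copy should care about that aggregate is the disputed question.
```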
You make a good point about the danger of alternate objectives creeping in if the original objective is unsatisfiable; this helps me see why my original thought experiment is not as useful as I’d hoped. What are your thoughts on this one? https://www.lesswrong.com/posts/heSbtt29bv5KRoyZa/the-first-person-perspective-is-not-a-random-sample?commentId=75ie9LnZgBEa66Kp8
No problem. I am actually very happy that we can reach some agreement, which does not happen very often in discussions of anthropics.