Probability should not depend on the type of reward. Of course, a complicated reward system could cause decision making to deviate from simple probability concerns, but the probability itself would not be affected. If it helps, consider a simple reward system in which each correct answer earns one util. As a participant, you take part in the same toss-and-clone experiment every day, so when you wake up the following day you do not know whether you are the same physical person as the day before, and you guess again for the same reward. Let your utils be independent of possible clones: e.g. if each correct guess is rewarded with a coin, then the cloning applies to the coins in your pocket too, so that your cumulative gain is affected only by your own past guesses.
Why does the extent of care for other clones matter? My answer and the other clones’ utils are causally independent: the other clone’s utility depends on his answer, not mine. If you are talking about my possible future fissions, it is still irrelevant, since my decision now affects the two equally.
Surely, if “the probability distribution of me being the original or the clone” exists, then it should be simple to devise a guessing strategy that maximizes my gains? Yet somehow this strategy is elusive. Instead, the proposed self-locating probability can only yield strategies that maximize the collective (or average) utility of all clones, even though some of them are clearly not me, as the probability itself states. And that assumes all clones make exactly the same decision as I do. If everyone must make the same decision (so there is really only one decision being made) and only the collective utility is considered, how is that decision still guided by a probability about the indexical me? It could equally be derived from the probability distribution of a randomly selected participant. Assuming I am a randomly selected participant is entirely unsubstantiated, and unnecessary for decision making, as it brings nothing to the table.
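The elusiveness of a gain-maximizing strategy can be made concrete with a small simulation. The sketch below (my own illustration, not part of the original argument; all names are mine) assumes the simplest setup: each experiment awakens exactly one original and one clone, every copy applies the same guessing strategy, and one util is paid per correct guess. The collective fraction of correct guesses is then pinned at 1/2 no matter what the strategy is, since each experiment contains one copy of each kind — so the collective score reflects only the population fractions, the same thing the randomly-selected-participant distribution already gives.

```python
import random

def run_experiments(strategy, n=10_000, seed=0):
    """Simulate n toss-and-clone experiments.

    Each experiment awakens two copies, one 'original' and one
    'clone'.  Every copy applies the same guessing strategy (a
    function taking an RNG and returning 'original' or 'clone')
    and earns one util per correct guess.  Returns the fraction
    of correct guesses across all copies.
    """
    rng = random.Random(seed)
    correct = total = 0
    for _ in range(n):
        for identity in ("original", "clone"):
            guess = strategy(rng)
            correct += (guess == identity)
            total += 1
    return correct / total

# Any uniform strategy gives collective accuracy 1/2: if every copy
# guesses 'original', the original is right and the clone is wrong
# in every single experiment.
always_original = lambda rng: "original"
coin_flip = lambda rng: rng.choice(["original", "clone"])

print(run_experiments(always_original))  # 0.5 exactly
print(run_experiments(coin_flip))        # close to 0.5
```

Since every strategy scores the same collectively, no strategy is singled out as the one a genuine indexical probability would recommend — which is the point: the number 1/2 here is a fact about the population, not about which copy is me.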