Sorry for abandoning the discussion and replying so late. I think even if the sole purpose of probability is to guide decision making, the problem with these self-locating probabilities remains. In the cloning example, suppose participants are rewarded for every correct guess about whether they are the original or the clone. “The probability distribution of me being the original or the clone” doesn’t help us make any decision. One may say these probabilities guide us to make decisions that maximize the overall benefit of all participants combined. However, such decisions are guided by “the probability distribution of a randomly selected participant being the original or the clone”, with no use of indexicals. And this purported use of self-locating probability rests on the assumption that I am a randomly selected observer from some reference class. In effect, an unsupported assumption is added, yet it doesn’t allow us to make any new decisions. From a decision-making point of view, the entire purpose of this assumption seems to be finding a use for these self-locating probabilities.
“The probability distribution of me being the original or the clone” would be useful to decision making if it guided us on how to maximize the benefit of me specifically, as stated in the probability distribution. But no such strategy exists. If one holds the view that probability serves no purpose other than decision making, then he should have no problem accepting that self-locating probabilities do not exist, since they serve no purpose.
What reward (and more importantly, what utility) does the predictor receive/lose for a correct/incorrect guess?
To the extent that “you” care about your clones, you should guess in ways that maximize the aggregate payout to all guessers. If you don’t, then guess to maximize the guesser’s payout even at the expense of clones (who will make the same guess, but be wrong more often).
Self-locating probabilities exist only to the extent that they influence how much utility the current decision-maker assigns to experiences of possibly-you entities.
Probability should not depend on the type of reward. Of course, a complicated reward system could cause decision making to deviate from simple probability concerns, but the probability itself would not be affected. If it helps, consider a simple reward system in which each correct answer earns one util. As a participant, you take part in the same toss-and-clone experiment every day, so when you wake up the following day you do not know whether you are the same physical person as the day before, and you guess again for the same reward. Let your utils be independent of possible clones: e.g., if each correct guess is rewarded with a coin, then the cloning applies to the coins in your pocket too, so that your cumulative gain is affected only by your own past guesses.
Why does the extent of care for other clones matter? My answer and the other clones’ utils are causally independent; the other clone’s utility depends on his answer. If you are talking about possible future fissions of me, that is still irrelevant, since my decision now would affect the two equally.
Surely, if “the probability distribution of me being the original or the clone” exists, then it would be simple to devise a guessing strategy to maximize my gains? But somehow this strategy is elusive. Instead, the purported self-locating probability can only yield strategies that maximize the collective (or average) utility of all clones, even though, as the probability itself states, some of them are clearly not me. And that is assuming all clones make exactly the same decision as I do. If everyone must make the same decision (so there is effectively only one decision) and only the collective utility is considered, then how is it still guided by a probability about the indexical me? That decision can be derived from the probability distribution of a randomly selected participant. Assuming I am a randomly selected participant is entirely unsubstantiated, and it is unnecessary to decision making, as it brings nothing to the table.
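To make the “collective utility” point concrete, here is a minimal simulation sketch. The setup details are my own assumptions, since the thread does not fully specify the experiment: each run a fair coin is tossed; on heads a clone is created, so an original and a clone both wake and guess; on tails only the original wakes. Each correct guess earns one util, and we compare fixed guessing strategies by aggregate score.

```python
import random

def simulate(runs, guess, seed=0):
    """Hypothetical 'toss and clone' setup (my assumption): each run a fair
    coin is tossed; on heads a clone is created, so two participants (one
    original, one clone) wake and guess; on tails only the original wakes.
    Returns (total correct guesses, total participants)."""
    rng = random.Random(seed)
    correct = participants = 0
    for _ in range(runs):
        heads = rng.random() < 0.5
        roles = ["original", "clone"] if heads else ["original"]
        for role in roles:
            participants += 1
            if guess == role:
                correct += 1
    return correct, participants

correct_o, n = simulate(100_000, "original")
correct_c, _ = simulate(100_000, "clone")
# Roughly 2/3 of all awakened participants are originals, so the fixed guess
# "original" earns the higher aggregate score.
print(correct_o / n, correct_c / n)
```

Under these assumed rules, always guessing “original” maximizes the aggregate payout, and the optimal guess falls directly out of the frequency with which a randomly selected participant is the original; no indexical probability enters the computation anywhere.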