Not trying to put it in any negative way, but I honestly find the reply vague and hard to respond to. I get a general impression of what you are trying to say, but I feel I am guessing. Do you disagree with my interpreting probability as relative frequencies in the disagreement example? Or do you think there has to be a defined cost/reward setup, making it a decision-making problem, before we can talk about probabilities in anthropics? Or is it something else?
Regarding different answers to different questions for the various instances of me: again, I am not sure what the argument is or how it relates to anthropics. Are you trying to say the disagreement on probability is due to different interpretations of the question? I also want to point out that not all anthropic problems involve different instances of an observer. Take the Doomsday Argument or the cloning experiment, for example: the paradox arises at the agent level, and no special consideration of time or instances is needed.
I think I’m mostly reacting to:

> More importantly “the probability of today being Monday”, or “the probability of this awakening being the first” do not exist.
Which I think is incorrect. They exist to the same extent that any probability exists: there are future experiences one can define (payouts, or resolutions of a wager) and it’s sensible to talk about the relative likelihood of those experiences.
I can relate to that. In fact, that is the most common criticism I have faced. After all, it is quite counter-intuitive.
I want to point to the paradox regarding the probability of me being a Boltzmann Brain. The probability of “this awakening being the first” is of the same form: the probability of an apparent indexical being a member of some default reference class. There is no experiment deciding which brain is me, just as there is no experiment determining which day is today. There is no reason to apply a principle of indifference among the members of the default reference class, yet such indifference is essential to coming up with a probability.
Of course one can define the experience. But I am not arguing that “today is Monday” is a nonsensical statement, only that there is no probability distribution over it. Yes, we can even wager on it. But we do not need probability to wager; probability is needed to come up with a betting strategy. Imagine you are the participant in the cloning-with-a-friend example who is repeating the experiment a large number of times, entering a wager about whether you are the original or the clone after each wake-up. There exists a strategy to maximize the total gain of all participants, and a strategy to maximize the average gain of all participants (assuming all participants act the same way I do). However, there is no strategy to simply maximize the gain of the self-apparent me. That is a huge red flag for me.
Of course, one may argue there is no such strategy because this beneficiary “me” is undefined (it is just an indexical, after all). But then would it be consistent to say the related probability exists and is well defined?
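To make the red flag concrete, here is a rough simulation of the repeated wager. The setup details are my own concretization for this sketch (a fair toss each round deciding whether a clone is created, one util per correct guess); other versions of the experiment change the numbers but not the asymmetry it illustrates: the collective objectives are computable, while “my gain” never even appears as a quantity.

```python
import random

# A minimal sketch, assuming a version of the experiment where each round
# a fair coin is tossed and a clone is created only on heads. Every
# awakened copy submits the same guess ("original" or "clone") and earns
# one util per correct guess.

ROUNDS = 100_000

def run(guess):
    """Return (average total gain per round, average gain per awakening)."""
    total_gain = 0
    awakenings = 0
    for _ in range(ROUNDS):
        cloned = random.random() < 0.5  # heads -> a clone is created
        copies = ["original", "clone"] if cloned else ["original"]
        # Subjectively identical copies cannot coordinate different
        # answers, so every copy makes the same guess.
        total_gain += sum(1 for identity in copies if identity == guess)
        awakenings += len(copies)
    return total_gain / ROUNDS, total_gain / awakenings

for guess in ("original", "clone"):
    per_round, per_awakening = run(guess)
    print(f"always guess {guess!r}: "
          f"total gain ≈ {per_round:.3f} per round, "
          f"average gain ≈ {per_awakening:.3f} per awakening")

# Both collective objectives are well defined (and both favour "original"
# in this setup). But no quantity here is "the gain of the self-apparent
# me": computing that would require a rule saying which copy is me, and
# nothing in the experiment supplies one.
```

Changing the toss or the payoffs changes the numbers, but not the pattern: every objective that can actually be written down ranges over participants, not over “me”.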
> But we do not need probability to wager; probability is needed to come up with a betting strategy.
This may be close to a crux for me. Other than guiding decisions, what purpose does a probability distribution serve? Once we’ve agreed that probability is about an agent’s uncertainty rather than an objective fact of the universe, it reduces to “what is the proper betting strategy?”, which combines probabilities with payoffs.
If you are a Boltzmann brain (or if you somehow update toward that conclusion), what will you do differently? Nothing: such brains don’t actually act; they just exist momentarily and experience things.
Sorry for abandoning the discussion and replying so late. I think even if the sole purpose of probability is to guide decision making, the problem with these self-locating probabilities remains. In the cloning example, suppose a reward is given for every correct guess about whether one is the original or the clone. “The probability distribution of me being the original or the clone” does not help us make any decision. One may say these probabilities guide us to decisions that maximize the overall benefit of all participants combined. However, such decisions are guided by “the probability distribution of a randomly selected participant being the original or the clone”, which involves no indexical. This purported use of self-locating probability rests on the assumption that I am a randomly selected observer from a certain reference class. In effect, an unsupported assumption is added, yet it does not allow us to make any new decisions. From a decision-making point of view, the entire purpose of the assumption seems to be finding a use for these self-locating probabilities.
“The probability distribution of me being the original or the clone” would be useful to decision making if it guided us on how to maximize the benefit of me specifically, as stated in the probability distribution. But no such strategy exists. If one holds the view that probability serves no purpose other than decision making, then one should have no problem accepting that self-locating probabilities do not exist, since they serve no purpose.
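For what it is worth, the non-indexical route can be written out explicitly. A small sketch, under the same assumed concretization as above (fair toss; a clone created only on heads): the average gain per awakening of a uniform guessing policy is exactly the distribution for a randomly selected participant, and no “me” occurs anywhere in the derivation.

```python
P_HEADS = 0.5

# Expected number of awakenings per round: two if cloned (heads), one if not.
exp_awakenings = P_HEADS * 2 + (1 - P_HEADS) * 1  # 1.5
# Exactly one original exists per round, cloned or not.
exp_originals = 1.0

# P(a randomly selected awakened participant is the original),
# pooling awakenings across rounds:
p_original = exp_originals / exp_awakenings  # 2/3

# The average gain per awakening of the uniform policy "everyone guesses g"
# is exactly P(a randomly selected participant is g):
avg_gain = {"original": p_original, "clone": 1 - p_original}
print(avg_gain)  # {'original': 0.666..., 'clone': 0.333...}
print("collectively optimal guess:", max(avg_gain, key=avg_gain.get))
```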
What reward (and more importantly, what utility) does the predictor receive/lose for a correct/incorrect guess?
To the extent that “you” care about your clones, you should guess in ways that maximize the aggregate payout to all guessers. If you don’t, then guess to maximize the guesser’s payout even at the expense of clones (who will make the same guess, but be wrong more often).
Self-locating probabilities exist only to the extent that they influence how much utility the current decision-maker assigns to experiences of possibly-you entities.
Probability should not depend on the type of reward. Of course, a complicated reward system could cause decision making to deviate from simple probability concerns, but the probability itself would not be affected. If it helps, consider a simple reward system in which each correct answer is awarded one util. As a participant, you take part in the same toss-and-clone experiment every day, so when you wake up the following day you do not know whether you are the same physical person as the day before. You guess again for the same reward. Let your utils be independent of possible clones: e.g., if each correct guess is rewarded with a coin, then the cloning applies to the coins in your pocket too, so that your cumulative gain is affected only by your own past guesses.
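And here is where the daily repetition bites, in the same assumed concretization (fair toss each night; a clone created only on heads). To count the relative frequency of “waking up as the original”, we must first adopt a rule for which copy continues as me the next day, and different, equally arbitrary rules give different frequencies:

```python
import random

DAYS = 100_000

def frequency(rule):
    """Relative frequency of waking up as the original, given a rule for
    which copy counts as "me" each morning: 'original' always follows the
    original body; 'random' follows a uniformly chosen awakened copy."""
    original_mornings = 0
    for _ in range(DAYS):
        cloned = random.random() < 0.5  # heads -> a clone was created overnight
        copies = ["original", "clone"] if cloned else ["original"]
        me = "original" if rule == "original" else random.choice(copies)
        if me == "original":
            original_mornings += 1
    return original_mornings / DAYS

print(frequency("original"))  # -> 1.0: "me" is the original every morning
print(frequency("random"))    # -> about 0.75 under this selection rule
```

Neither rule is supplied by the experiment itself, which is exactly why I say the long-run frequency, and hence the probability, of “I am the original” is undefined.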
Why does the extent of care for other clones matter? My answer and the other clones’ utils are causally independent: the other clone’s utility depends on his answer. If you are talking about the possible future fissions of me, it is still irrelevant, since my decision now would affect the two equally.
Surely, if “the probability distribution of me being the original or the clone” exists, then it would be simple to devise a guessing strategy to maximize my gains? But somehow this strategy is elusive. Instead, the purported self-locating probability can only yield strategies that maximize the collective (or average) utilities of all clones, even though some of them are clearly not me according to that very probability. And that is assuming all clones make exactly the same decision I do. If everyone must make the same decision (so there is effectively only one decision) and only the collective utility is considered, then how is it still guided by a probability about the indexical me? That decision could be derived from the probability distribution of a randomly selected participant. Assuming that I am a randomly selected participant is entirely unsubstantiated, and unnecessary for decision making, as it brings nothing to the table.