I decided to move some of my thoughts to the comments to keep the OP short.
Comparison to the Dr Evil Problem
This is very similar to Dr Evil, but there are two key differences. Firstly, the action(s) taken by the genie depend on what the genie predicts that you or your clones will do, instead of on what is actually chosen. Secondly, if the genie predicts that you will choose to be pelted with eggs, your clones will never exist.
I previously argued for the Dr Evil problem that you ought to take the blackmail threat as seriously as a regular blackmail threat, since the creation of clones lowers the probability (which was never actually 100% to begin with) that you are the real Dr Evil. What is confusing here is that your reference class changes according to your decision. If you choose to be pelted with eggs, you know that you are not a clone with 100% probability. If you choose to be granted the perfect life, then you have a 1,000,000⁄1,000,001 chance of being a clone. This is a problem, as we need to know who we are optimising for before we can say what the optimal solution is. Compare this to the Dr Evil problem, where the chance of being a clone is independent of your choice.
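To make the numbers concrete, here is a minimal sketch of the posterior probability of being a clone under each choice, assuming (as in the problem) that 1,000,000 clones are created only if the genie predicts you choose the perfect life:

```python
from fractions import Fraction

N_CLONES = 1_000_000  # clones created only if the genie predicts "perfect life"

def p_clone(choice: str) -> Fraction:
    """Probability that you are a clone, given your choice (perfect predictor)."""
    if choice == "pelted_with_eggs":
        # The genie predicted this, so no clones were ever created:
        # you must be the original.
        return Fraction(0)
    # One original plus N_CLONES clones, all in the same experiential state.
    return Fraction(N_CLONES, N_CLONES + 1)

print(p_clone("pelted_with_eggs"))  # 0
print(p_clone("perfect_life"))      # 1000000/1000001
```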
Comparison to the Tropical Paradise Problem
In the Bayesian Conundrum, if you decide not to create any clones, you are indifferent about your decision, as you know you were the original who was always destined to freeze. On the other hand, immediately after you choose to create clones, you are thankful that you did, as it makes it highly likely that you are in fact a clone. Again, this is an issue with differing reference classes.
Motivation
We’ve seen that “all observers who experience state X” can be ambiguous unless we’ve already fixed the decision. My motivation is to deal with problems like Parfit’s Hitchhiker. When the predictor is perfect, paying always results in a better outcome, so the set of observers whose outcomes are optimised doesn’t matter. However, suppose we roll a 100-sided die and there is one world for each possible result. In the world where the die shows 100, the driver always predicts that you’ll pay, while the other 99% of the time he is a perfect predictor. Who is referred to by “all observers who experience being in town”? If you pay, this includes the version of you in all 100 worlds, while if you never pay, it only includes the version of you in world 100. Once we decide which set we should be optimising for, answering the problem is easy (so long as you accept Timeless Decision Theory’s argument that predictors are subjunctively linked to your decision).
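Here is a minimal sketch of that counting argument, just enumerating which versions of you experience being in town under each policy:

```python
def in_town(world: int, policy: str) -> bool:
    """Does the version of you in this world experience being in town?

    In world 100 the driver predicts 'pay' no matter what; in worlds 1-99
    he is a perfect predictor of your actual policy.
    """
    predicted_to_pay = True if world == 100 else (policy == "pay")
    # The driver only gives you a lift if he predicts payment.
    return predicted_to_pay

for policy in ("pay", "never_pay"):
    worlds = [w for w in range(1, 101) if in_town(w, policy)]
    print(policy, "->", len(worlds), "world(s) in town")
# pay       -> 100 world(s) in town
# never_pay -> 1 world(s) in town (world 100 only)
```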
Possible Answers
Note: The content below has now been developed into a post here. I would suggest reading the post instead of the rest of the comment.
Suppose an agent A in experiential state S faces a decision. The three most obvious ways of evaluating these kinds of decisions, to my mind, are as follows:
1) If C is a choice, calculate the expected utility of C by averaging over all agents who experience S when A chooses C.
2) If C and D are choices, compare these pairwise by calculating the average over all agents who experience S when A chooses either C or D (non-existence is treated as a 0).
3) If C is a choice, calculate the expected utility of C by averaging over all agents who experience S given any choice. Again, non-existence is treated as a 0.
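To make the three rules concrete, here is a minimal sketch on a toy two-choice problem; the agents and utilities are arbitrary illustrative assumptions of mine, not taken from any of the problems above:

```python
from statistics import mean

# Toy model: outcomes[choice][agent] = (experiences_S, utility).
# Agents and utilities are arbitrary illustrative assumptions.
outcomes = {
    "C": {"a1": (True, 10), "a2": (True, 4),  "a3": (False, 0)},
    "D": {"a1": (True, 6),  "a2": (False, 0), "a3": (False, 0)},
}

def rule1(choice):
    """1): average over agents who experience S when this choice is made."""
    return mean(u for (exp, u) in outcomes[choice].values() if exp)

def rule2(c, d):
    """2): pairwise average over agents who experience S under either choice."""
    ref = [a for a in outcomes[c] if outcomes[c][a][0] or outcomes[d][a][0]]
    return mean(outcomes[c][a][1] for a in ref), mean(outcomes[d][a][1] for a in ref)

def rule3(choice):
    """3): average over agents who experience S under at least one available choice."""
    ref = [a for a in outcomes[choice] if any(outcomes[ch][a][0] for ch in outcomes)]
    return mean(outcomes[choice][a][1] for a in ref)

print(rule1("C"), rule1("D"))  # rule 1 averages over different sets per choice
print(rule2("C", "D"))         # rule 2 averages over the union (a1 and a2)
print(rule3("C"), rule3("D"))  # rule 3 fixes the set across all choices
```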
Unfortunately, there are arguments against each of these possibilities.
Against 1) - No Such Agents
In the imperfect Parfit’s Hitchhiker, 1) defects, while 2) co-operates. I would suggest that 1) is slightly less plausible than 2), since the former runs into difficulties with the perfect Parfit’s Hitchhiker: no agent experiences being in town when defection is chosen, yet this option is clearly worse. One response to this empty reference class would be to set the expected utility to 0 in this case, but this would result in us dying in the desert. Another suggestion would be to invalidate any option where there is no agent who experiences S when A chooses C. However, in the Retro Blackmail scenario with a perfect predictor, we want to refuse to pay precisely so that we don’t end up with a blackmail letter, so this doesn’t seem to work either.
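A minimal sketch of the failure mode: with a perfect predictor, rule 1’s reference class for defection is empty, so its average is simply undefined (the utility of 99 is an illustrative assumption):

```python
from statistics import mean, StatisticsError

# Perfect Parfit's Hitchhiker: the driver only takes you to town if he
# predicts that you will pay. The utility of 99 is an assumption.
utilities_in_town = {
    "pay":    [99],  # one agent experiences being in town
    "defect": [],    # nobody ever experiences being in town
}

for choice, utilities in utilities_in_town.items():
    try:
        print(choice, mean(utilities))
    except StatisticsError:
        print(choice, "undefined: empty reference class")
```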
Against 2) - Magic Spells
Suppose you have the ability to cast one of two spells:
Happiness Spell: +100 utility for you
Oneness Spell: Makes everyone in the world temporarily adopt your decision-making algorithm and have the same experiences that you are having in this situation, then grants each of them +1 utility when it wears off if they chose to cast the Oneness Spell.
2) suggests casting the Oneness Spell, but you only have a reason to choose the Oneness Spell if you think you would choose the Oneness Spell. However, the same also holds for the Happiness Spell. Both spells are the best on average for the people who choose them, but these are different groups of people.
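A minimal numeric sketch of why rule 2 favours the Oneness Spell; the world population is an arbitrary assumption:

```python
from statistics import mean

POPULATION = 1_000  # arbitrary assumption for illustration

def rule2_average(choice: str) -> float:
    """Average utility over everyone who experiences S under either spell.

    Under the Oneness Spell everyone temporarily shares your experience,
    so all POPULATION people enter the reference class for both choices;
    those who gain nothing under the Happiness Spell count as 0.
    """
    if choice == "oneness":
        return mean([1] * POPULATION)            # everyone gets +1
    return mean([100] + [0] * (POPULATION - 1))  # only you get +100

print(rule2_average("oneness"))    # 1.0
print(rule2_average("happiness"))  # 0.1
```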
Against 3) - Irrelevant Considerations
Suppose we defend 3). We can imagine adding an irrelevant option Z that expands the reference class to cover all individuals, as follows. Firstly, if it is predicted that you will take option Z, everyone’s minds are temporarily overwritten so that they are effectively clones of you facing the problem under discussion. Secondly, option Z causes everyone who chooses it to lose a large amount of utility, so no-one should ever take it. But according to this criterion, Z would expand the reference class used even when comparing choice C to choice D. It doesn’t seem that a decision that is never taken should be able to do this.
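A minimal sketch of how merely adding Z to the option set changes rule 3’s reference class for C and D; the utilities and population size are illustrative assumptions:

```python
from statistics import mean

POPULATION = 1_000  # arbitrary assumption

def rule3_average(your_utility: float, z_available: bool) -> float:
    """Rule 3 average for a choice, over agents who experience S under ANY option.

    Without Z, only you ever experience S, so the average is just your utility.
    With Z available, everyone would experience S under Z, so all POPULATION
    agents join the reference class and count as 0 for choices C and D.
    """
    if not z_available:
        return your_utility
    return mean([your_utility] + [0] * (POPULATION - 1))

print(rule3_average(100, z_available=False))  # 100
print(rule3_average(100, z_available=True))   # 0.1
```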