Say you are about to flip a quantum coin, and the coin has an equal probability of coming up heads or tails.
If it comes up heads, a machine creates 1,000 simulations of you before flipping the coin (with the same subjective experience, so that you cannot tell whether you yourself are a simulation) and gives each of these simulations a lollipop after they flip the coin.
If it comes up tails, you get nothing.
Now, before you flip the coin, what is the probability that you will receive a lollipop?
There are multiple consistent answers here, because there are multiple ways to assign measures to events that obey the probability axioms and are consistent with some reasonable reading of the words in English. There are many points of ambiguity!
Most importantly, what exactly does the term “you” refer to in each instance where it is used in this scenario description?
Does it always refer to the same entity? For example, if the term “you” always means a physical entity, then the probability is zero, because in this scenario no physical entity ever receives a lollipop. (There are some simulated entities that falsely believe they are you, and that experience receiving a lollipop, but they’re irrelevant.)
Maybe it refers to any entity in an epistemic state immediately prior to flipping the coin in such a scenario? Then there may be 1 of these, or 1,001, or any other number, depending on how you count and on what the rest of the universe or multiverse contains. For example, there are infinitely many Turing machines that simulate an entity in this epistemic state having subsequent experiences. I would expect that most of them (weighted by some complexity metric) do not simulate the entity subsequently experiencing receipt of a lollipop. Should they be included in the probability calculation?
If “you” can include entities being simulated, can the term “coin” include simulated coins? How are we to interpret “the coin has an equal probability of coming up heads or tails”? Is this true of the simulated coins? Are they independent of one another, and of whichever “you” is being discussed (some of whom might not have any corresponding coin)?
So in short, what sample space are you using? Does it satisfy any particular symmetry properties that we can use to fill in the blanks “nicely”? Note that you can’t just have all the nice properties, since some are inconsistent with the others.
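To make the dependence on the sample space concrete, here is a rough sketch of how three candidate readings give three different answers. The counting rules and the “SSA-flavoured” / “SIA-flavoured” labels are my own illustrative assumptions, not anything the question specifies, and the only inputs are the quantities stated in the scenario (a fair coin and 1,000 simulations).

```python
# Rough sketch: the answer depends entirely on which sample space you pick.
# Assumptions (mine, not part of the original scenario): the coin is fair,
# heads means 1,000 simulations exist alongside the 1 physical entity,
# tails means only the physical entity exists, and only simulations
# ever receive lollipops.

N_SIMS = 1000

def p_lollipop_physical_only():
    """Reading 1: "you" always means the single physical entity.
    No physical entity ever receives a lollipop, so the probability is 0."""
    return 0.0

def p_lollipop_ssa_style():
    """Reading 2: "you" is any entity in the pre-flip epistemic state,
    sampled uniformly *within* each branch (an SSA-flavoured rule).
    Heads branch: 1,000 lollipop-receivers out of 1,001 entities.
    Tails branch: 0 out of 1."""
    return 0.5 * (N_SIMS / (N_SIMS + 1)) + 0.5 * 0.0

def p_lollipop_sia_style():
    """Reading 3: every entity across both branches counts equally
    (an SIA-flavoured rule), so the heads branch gets more weight
    simply because it contains more entities."""
    heads_weight = 0.5 * (N_SIMS + 1)   # branch probability x entity count
    tails_weight = 0.5 * 1
    p_lollipop_given_heads_entity = N_SIMS / (N_SIMS + 1)
    return (heads_weight * p_lollipop_given_heads_entity) / (heads_weight + tails_weight)

if __name__ == "__main__":
    print(p_lollipop_physical_only())  # 0.0
    print(p_lollipop_ssa_style())      # ~0.4995
    print(p_lollipop_sia_style())      # ~0.998
```

None of these is uniquely forced by the problem statement; choosing among them is exactly the ambiguity described above.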
I’m gonna be lazy and say:
If that ^ is a given premise in this hypothetical, then we know for certain that this is not a simulation (because in a simulation you’d get a lollipop even after tails). Therefore the probability of receiving a lollipop here is 0 (unless you receive one for a completely unrelated reason).
Sorry, but I think you may have misunderstood the question, since your answer doesn’t make any sense to me. The main thing I was puzzled about was whether the odds of getting a lollipop are 1:1 (matching the fair coin’s odds of coming up heads) or 1001:1 (i.e., whether the simulations affect the self-location uncertainty). As shiminux said, it is similar to the Sleeping Beauty problem, where self-location uncertainty is at play.