I’m gonna be lazy and say:
If that ^ is a given premise in this hypothetical, then we know for certain that this is not a simulation (because in a simulation, after tails, you'd get something). Therefore the probability of receiving a lollipop here is 0 (unless you receive one for a completely unrelated reason).
Sorry, but I think you may have misunderstood the question, since your answer doesn't make sense to me. The main thing I was puzzled about was whether the odds of getting a lollipop are 1:1 (matching the probability of the fair coin coming up heads) or 1001:1 (i.e., whether the simulations affect the self-location uncertainty). As shiminux said, it is similar to the Sleeping Beauty problem, where self-location uncertainty is at play.
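For what it's worth, here is a small Monte Carlo sketch of the two ways of counting I think are being contrasted. The exact setup in the code (1000 simulations spawned on tails, each of which gets a lollipop along with the real subject) is my assumption about the puzzle, not something stated above; the only point is that counting per coin flip gives odds of about 1:1, while counting per subjective experience gives odds of about 1001:1, which is exactly the Sleeping-Beauty-style self-location split.

```python
import random

# A minimal Monte Carlo sketch, NOT a statement of the original puzzle's rules.
# Assumed setup (my reading, treat it as hypothetical): a fair coin is flipped;
# on tails the real subject gets a lollipop and 1000 simulated copies of that
# moment are also run, each getting a lollipop; on heads there is a single
# experience and no lollipop.

N_TRIALS = 100_000
N_SIMS = 1000  # assumed number of simulations spawned on tails

flip_lollipops = 0      # count one outcome per coin flip ("halfer"-style)
observer_lollipops = 0  # count every subjective experience ("thirder"-style)
observer_total = 0

for _ in range(N_TRIALS):
    tails = random.random() < 0.5
    if tails:
        flip_lollipops += 1
        observer_lollipops += 1 + N_SIMS  # real subject + all simulated copies
        observer_total += 1 + N_SIMS
    else:
        observer_total += 1               # heads: one experience, no lollipop

print("Per coin flip:  P(lollipop) = %.3f  (odds ~1:1)"
      % (flip_lollipops / N_TRIALS))
print("Per experience: P(lollipop) = %.4f (odds ~%d:1)"
      % (observer_lollipops / observer_total, N_SIMS + 1))
```

Whether the per-experience count is the right way to assign credence is, of course, the whole disagreement between the anthropic camps; the sketch only shows that the two conventions produce the two candidate answers.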