In the scenario you describe, you know at the outset that there is only one copy of you.
Sort of yes, sort of no. For my formulation to work, different observer-moments have to be considered as separate; seeing one math problem represents the entire experience of being a particular person or knowing that a particular person exists. If I set the program up to shuffle list P like a deck of cards and let me go through the list one by one, and I look at 10 math problems, that’s equivalent to knowing that the world contains at least 10 unique individuals.
In other words, ‘I’ am not an individual in the world represented by W; the math problems are the individuals, and the possibility of there being many of them is already included.
(Is the fact that randomly generated simple math problems aren’t sentient a problem in some way?)
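Here’s a minimal sketch of what I mean; everything the setup leaves unspecified (the prior over W, the form of the problems) is filled in with hypothetical placeholders:

```python
import random

def generate_world(prior=(1, 2)):
    """Draw W from a made-up prior, then build the shuffled problem list P."""
    W = random.choice(prior)
    # Each "problem" is a random addition question; duplicates are possible
    # but don't matter for the sketch.
    P = [f"{random.randint(0, 99)} + {random.randint(0, 99)} = ?"
         for _ in range(10 ** (3 * W))]
    random.shuffle(P)  # shuffle P like a deck of cards
    return W, P

W, P = generate_world()
seen = P[:10]  # looking at 10 problems ~ knowing the world contains
               # at least 10 individuals
print(f"W = {W}; observer-moments so far: {len(seen)}, e.g. {seen[0]!r}")
```

Each problem drawn off the front of the shuffled deck plays the role of one observer-moment.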
Then ‘the observer’ in your scenario doesn’t correspond to anything that exists in the real world. After all, there is no epiphenomenal ‘passenger’ who chooses a person at random and watches events play out on the theatre of their mind.
Anthropic probabilities are meaningless without an epiphenomenal passenger. If p is “the probability of being person X”, then what does “being person X” mean? Assuming X exists, the probability of X being X is 1. What about the probability of “me” being X? Well, who am I? If I am X, then the probability of me being X is 1. It’s only if I consider myself to be an epiphenomenal passenger who might have ridden along with any one of many different people that it makes sense to assign a value other than 0 or 1 to the probability of ‘finding myself as X’.
Calculating anthropic probabilities requires some rules about how the passenger chooses whom to ‘ride on’. Yet it’s impossible to state these rules without arbitrariness in cases where there’s no right way to count up observers and draw their boundaries. I think the whole idea of anthropic reasoning is untenable.
I basically agree. This particular case (and perhaps others, though I haven’t checked) seems like it can be formulated in non-anthropic terms, though. The observer not corresponding to anything in the real world shouldn’t be a problem, I expect; a fair 6-sided die should have a 1⁄6 chance of showing 1 when rolled even if nobody’s around to watch that happen.
What you’ve done is constructed an analogy that looks like this:
Generation of 10^(3W) math problems <---> Generation of 10^(3W) people
Funny set of rules A whereby an observer is assigned a problem <---> SSA
Funny set of rules B whereby an observer is assigned a problem <---> SIA
Probability that the observer is looking at problem X <---> Anthropic probability of being person X
But whereas “the probability that the observer is looking at problem X” depends on whether we arbitrarily choose rules A or B, the anthropic probability of being person X is supposed (by those who believe anthropic probabilities exist) to be a determinate matter. It’s not supposed to be a mere convention that we choose SSA or SIA; it’s supposed to be that one is ‘correct’ and the other ‘wrong’ (or that both are wrong and something else is correct).
If we only consider non-anthropic problems, then we can resolve everything satisfactorily by choosing ‘rules’ like A or B (and note that unless we add an observer and choose rules, there won’t be any questions to resolve), but that won’t tell us anything about SSA and SIA. (This is a clearer explanation of what I think ‘doesn’t make sense’ about your approach than the one I gave in my first comment.)
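To make the arbitrariness concrete, here’s a rough simulation. The 50/50 prior over W ∈ {1, 2} and the two rule definitions are my own stand-ins (an SSA-like rule and an SIA-like rule), not anything fixed by your setup:

```python
import random

PRIOR = {1: 0.5, 2: 0.5}                  # assumed prior over W
SIZE = {w: 10 ** (3 * w) for w in PRIOR}  # 10^(3W) problems in world W

def rule_A():
    """SSA analogue: pick a world by its prior, then a problem uniformly within it."""
    w = random.choices(list(PRIOR), weights=list(PRIOR.values()))[0]
    return w, random.randrange(SIZE[w])

def rule_B():
    """SIA analogue: weight each world by its prior times its number of problems."""
    weights = [PRIOR[w] * SIZE[w] for w in PRIOR]
    w = random.choices(list(PRIOR), weights=weights)[0]
    return w, random.randrange(SIZE[w])

trials = 100_000
for rule in (rule_A, rule_B):
    hits = sum(rule()[0] == 2 for _ in range(trials))
    print(f"{rule.__name__}: P(observer's problem lies in the big world) "
          f"~ {hits / trials:.3f}")
# rule_A gives ~0.5, rule_B gives ~0.999. For any particular problem X in the
# big world, P(looking at X) is that figure divided by 10^6, so it too depends
# on the rule. Neither number is privileged; each just follows from the rule chosen.
```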
It makes sense to look at it that way, yes.
I do think that something like A or B can accurately be said to be true of the world, though.