It then chooses a random math problem from P and presents it to me, without telling me that problem's number.
In the scenario you describe, you know at the outset that there is only one copy of you. To be able to apply anthropic assumptions like SIA and SSA, you would need to amend the scenario so that there are multiple ‘copies’ of you.
Rather than generating 10^(3W) random simple math problems and having a random one shown to you, say you arrange for 10^(3W) copies of yourself to be created, and then let each copy be shown a different math problem.
Then SSA says that, upon finding yourself looking at a math problem, you learn nothing at all about W, whereas SIA says you need to multiply your prior odds by 1:1000:1000000.
An interesting variation to consider is where 10^6 copies of you are created, and then 10^(3W) of them are chosen at random to be shown a math problem. Then to be able to apply SSA, you need to decide whether to regard yourself as (i) a random person or (ii) a random person-who-received-a-math-problem. If (i) then SSA and SIA will both recommend updating your odds as above. If (ii) then SSA says you learn nothing whereas SIA recommends that you update your odds. (SSA cares about the reference class whereas SIA doesn’t.)
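The odds updates above can be checked with a short sketch. A uniform prior over W in {0, 1, 2} is my assumption for illustration; the scenario itself doesn't fix a prior.

```python
# SSA vs SIA update on "I am looking at a math problem", where the world
# with parameter W contains 10**(3*W) copies (1, 1000, or 1000000).
from fractions import Fraction

ws = [0, 1, 2]
prior = {w: Fraction(1, 3) for w in ws}   # assumed uniform prior over W
copies = {w: 10 ** (3 * w) for w in ws}   # observers created in each world

# SSA: every copy sees a problem whatever W is, so the likelihood of the
# observation is 1 in every world -- no update.
ssa_post = {w: prior[w] for w in ws}

# SIA: weight each world by how many observers it contains, then renormalize.
total = sum(prior[w] * copies[w] for w in ws)
sia_post = {w: prior[w] * copies[w] / total for w in ws}
# The SIA posterior odds come out 1 : 1000 : 1000000, as stated above.
```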
Perhaps the purpose of your ‘model’ of SIA is precisely to find a way of understanding it without bringing in multiple observers or ‘copies’. To be honest, I don’t think this makes much sense (like trying to explain relativity without reference to space or time (or spacetime)).
In the scenario you describe, you know at the outset that there is only one copy of you.
Sort of yes, sort of no. For my formulation to work, different observer-moments have to be considered as separate; seeing one math problem represents the entire experience of being a particular person or knowing that a particular person exists. If I set the program up to shuffle list P like a deck of cards and let me go through the list one by one, and I look at 10 math problems, that’s equivalent to knowing that the world contains at least 10 unique individuals.
In other words, ‘I’ am not an individual in the world represented by W; the math problems are the individuals, and the possibility of there being many of them is already included.
(Is the fact that randomly generated simple math problems aren’t sentient a problem in some way?)
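The shuffled-list reading above can be sketched as an ordinary Bayesian update on an existence fact. The uniform prior over W in {0, 1, 2} is an assumption for illustration:

```python
# Seeing 10 distinct problems from the shuffled list P is possible only in
# worlds holding at least 10 problems; read purely as "the world contains
# at least 10 individuals", it rules the smaller worlds out and leaves the
# rest untouched.
from fractions import Fraction

ws = [0, 1, 2]
prior = {w: Fraction(1, 3) for w in ws}       # assumed uniform prior over W
n_problems = {w: 10 ** (3 * w) for w in ws}   # 1, 1000, 1000000 problems

seen = 10                                      # distinct problems looked at
likelihood = {w: int(n_problems[w] >= seen) for w in ws}

total = sum(prior[w] * likelihood[w] for w in ws)
posterior = {w: prior[w] * likelihood[w] / total for w in ws}
# W = 0 (a single problem) is ruled out; W = 1 and W = 2 remain equal.
```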
In other words, ‘I’ am not an individual in the world represented by W; the math problems are the individuals, and the possibility of there being many of them is already included.
Then ‘the observer’ in your scenario doesn’t correspond to anything that exists in the real world. After all, there is no epiphenomenal ‘passenger’ who chooses a person at random and watches events play out on the theatre of their mind.
Anthropic probabilities are meaningless without an epiphenomenal passenger. If p is “the probability of being person X” then what does “being person X” mean? Assuming X exists, the probability of X being X is 1. What about the probability of “me” being X? Well, who am I? If I am X then the probability of me being X is 1. It’s only if I consider myself to be an epiphenomenal passenger who might have ridden along with one of many different people that it makes sense to assign a value other than 0 or 1 to the probability of ‘finding myself as X’.
Calculating anthropic probabilities requires some rules about how the passenger chooses whom to ‘ride on’. Yet it’s impossible to state these rules without arbitrariness, in cases where there’s no right way to count up observers and draw their boundaries. I think the whole idea of anthropic reasoning is untenable.
I basically agree. This particular case (and perhaps others, though I haven’t checked) seems like it can be formulated in non-anthropic terms, though. The observer not corresponding to anything in the real world shouldn’t be a problem, I expect; a fair six-sided die has a 1/6 chance of showing 1 when rolled even if nobody’s around to watch it happen.
What you’ve done is constructed an analogy that looks like this:
Generation of 10^(3W) math problems <---> Generation of 10^(3W) people
Funny set of rules A whereby an observer is assigned a problem <---> SSA
Funny set of rules B whereby an observer is assigned a problem <---> SIA
Probability that the observer is looking at problem X <---> Anthropic probability of being person X
But whereas “the probability that the observer is looking at problem X” depends on whether we arbitrarily choose rules A or B, the anthropic probability of being person X is supposed (by those who believe anthropic probabilities exist) to be a determinate matter. It’s not supposed to be a mere convention that we choose SSA or SIA, it’s supposed to be that one is ‘correct’ and the other ‘wrong’ (or both are wrong and something else is correct).
If we only consider non-anthropic problems, then we can resolve everything satisfactorily by choosing ‘rules’ like A or B (and note that unless we add an observer and choose rules, there won’t be any questions to resolve), but that won’t tell us anything about SSA and SIA. (This is a clearer explanation than I gave in my first comment of what I think ‘doesn’t make sense’ about your approach.)
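One way to make the arbitrariness concrete is to simulate two hypothetical assignment rules. The specific rules, the uniform prior over worlds, and the seed are my choices for illustration, not anything fixed by the argument:

```python
# Rule A: pick a world by the prior, then hand the observer one of that
# world's problems (SSA-flavoured: world first, problem second).
# Rule B: treat every problem-slot across the prior mixture as equally
# likely to be the one the observer faces (SIA-flavoured).
import random

random.seed(0)
worlds = {0: 1, 1: 1000, 2: 1000000}    # W -> number of problems generated

def rule_a():
    return random.choice(list(worlds))  # uniform over worlds

def rule_b():
    total = sum(worlds.values())
    r = random.randrange(total)         # uniform over problem-slots
    for w, n in worlds.items():
        if r < n:
            return w
        r -= n

trials = 100_000
a_frac = sum(rule_a() == 2 for _ in range(trials)) / trials
b_frac = sum(rule_b() == 2 for _ in range(trials)) / trials
# Under rule A the observer faces a W = 2 problem about a third of the
# time; under rule B almost always. Same worlds, different 'probability of
# looking at problem X' -- the rule choice is doing all the work.
```

If anthropic probabilities were a determinate matter, only one such rule could be the ‘correct’ one; the simulation only shows that the number you get tracks the rule you picked.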
It makes sense to look at it that way, yes.
I do think that something like A or B can accurately be said to hold true of the world, though.