Let’s look at examples where we know the ‘right’ answer:
Someone flips a coin. If it’s heads they copy you a thousand times and put 1 of you in a green room and 999 of you in a red room. If it’s tails they do the opposite.
You wake up in a green room and conclude that the coin was likely tails.
Now assume that in addition to copying you 1000 times, 999 of you were randomly selected to have the part of your brain that remembers to apply anthropic reasoning erased. You wake up in a green room and remember to apply the anthropic principle, but, knowing that, you conclude that the group of people like you consists of only you. Nonetheless you should (I intuitively feel) still conclude the coin was likely tails.
Now assume that instead of random memory erasure, if the coin was heads the people in the red room forget about anthropics, and if the coin was tails the people in the green room forget about anthropics. You wake up in a green room and remember to apply the anthropic principle. Now it matters that you know to use the anthropic principle, and you should conclude with 100% certainty that the coin came up heads.
So, sometimes we need to consider the fact that the other people can apply the anthropic principle, and sometimes we don’t need to consider it. I think I’ve confused myself.
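If it helps, here is a minimal Monte Carlo sketch of the three setups in Python. It treats ‘you’ as one clone picked uniformly at random from the thousand, which is of course itself a modelling assumption, and the function names are just made up for illustration:

```python
import random

N = 1000  # number of clones

def my_room(coin):
    # Which room one particular clone ('you') ends up in, with 'you' treated
    # as a uniformly random one of the N copies.
    n_green = 1 if coin == 'heads' else N - 1
    return 'green' if random.randrange(N) < n_green else 'red'

def p_tails_given_green_and_anthropics(erasure, trials=1_000_000):
    # Estimate P(tails | you wake in a green room and still remember anthropics)
    # under one of the three erasure rules above.
    tails = total = 0
    for _ in range(trials):
        coin = random.choice(('heads', 'tails'))
        room = my_room(coin)
        if erasure == 'none':
            remembers = True
        elif erasure == 'random':
            # 999 of the 1000 clones, chosen at random, forget anthropics
            remembers = (random.randrange(N) == 0)
        else:  # 'room-dependent': heads -> red rooms forget, tails -> green rooms forget
            forgetful_colour = 'red' if coin == 'heads' else 'green'
            remembers = (room != forgetful_colour)
        if room == 'green' and remembers:
            total += 1
            tails += (coin == 'tails')
    return tails / total

for erasure in ('none', 'random', 'room-dependent'):
    print(erasure, round(p_tails_given_green_and_anthropics(erasure), 3))
# roughly: none 0.999, random 0.999 (noisy, since the conditioning event is rare),
# room-dependent 0.0 (in the third setup the coin was certainly heads)
```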
Nonetheless you should (I intuitively feel) still conclude the coin was likely tails.
I think your intuitions lead you astray at exactly this point.
Suppose that the 1000 of you are randomly ‘tagged’ with distinct id numbers from the set {1,...,1000}, and that a clone learns its id number upon waking. Suppose you wake in a green room and see id number 707.
If all the clones remember to apply anthropic reasoning (assuming for argument’s sake that my current line of reasoning is ‘anthropic’) then you can easily work out that the probability of the observed event “number 707 is an anthropic reasoner in a green room” is 1/1000 if the coin was heads or 999/1000 if it was tails.
However, if 999 clones have their ‘anthropic reasoning’ capacity removed then both probabilities are 1/1001, and you should conclude that heads and tails are equally likely.
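For the first case, where all the clones reason anthropically, the update is just Bayes’ rule; here it is spelled out with exact fractions, assuming equal priors on the coin (a quick sketch, nothing beyond the numbers above):

```python
from fractions import Fraction as F

# All 1000 clones reason anthropically; the evidence is
# "clone #707 is an anthropic reasoner in a green room".
prior = {'heads': F(1, 2), 'tails': F(1, 2)}
likelihood = {'heads': F(1, 1000),    # 1 green room out of 1000 if heads
              'tails': F(999, 1000)}  # 999 green rooms out of 1000 if tails

evidence = sum(prior[c] * likelihood[c] for c in prior)
posterior = {c: prior[c] * likelihood[c] / evidence for c in prior}
print(posterior)  # {'heads': Fraction(1, 1000), 'tails': Fraction(999, 1000)}
```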
Are you sure? In the earlier model where memory erasure is random, remembering AR is independent of the room placements and won’t tell you anything extra about them.
(Note: I got the numbers slightly wrong—the 1001s should have been 1000s etc.)
Yes: If the coin was heads then the probability of event “clone #707 is in a green room” is 1/1000. And since, in this case, the clone in the green room is sure to be an anthropic reasoner, the probability of “clone #707 is an anthropic reasoner in a green room” is still 1/1000.
On the other hand, if the coin was tails then the probability of “clone #707 is in a green room” is 999/1000. However, clone #707 also knows that “clone #707 is an AR”, and P(#707 is AR | coin was tails and #707 is in a green room) is only 1/999.
Therefore, P(#707 is an AR in a green room | coin was tails) is (999/1000) * (1/999) = 1/1000.
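Spelling out that arithmetic under the stated assumption (given tails, the one remaining anthropic reasoner is one of the 999 green-room clones):

```python
from fractions import Fraction as F

# P(#707 in a green room | tails) * P(#707 is the AR | tails, #707 in a green room)
p_tails_case = F(999, 1000) * F(1, 999)
print(p_tails_case)  # 1/1000, matching the heads case, so the two hypotheses tie
```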
If the coin was heads then the probability of event “clone #707 is in a green room” is 1/1000. And since, in this case, the clone in the green room is sure to be an anthropic reasoner, the probability of “clone #707 is an anthropic reasoner in a green room” is still 1/1000.
But you know that you are AR in the exact same way that you know that you are in a green room. If you’re taking P(BeingInGreenRoom|CoinIsHead)=1/1000, then you must equally take P(AR)=P(AR|CoinIsHead)=P(AR|BeingInGreenRoom)=1/1000.
and P(#707 is AR | coin was tails and #707 is in a green room) is only 1/999.
Why shouldn’t it be 1/1000? The lucky clone who gets to retain AR is picked at random among the entire thousand, not just the ones in the more common type of room.
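To make the disagreement concrete: if the surviving reasoner really is drawn uniformly from all 1000 clones, independently of the rooms (which is how I read the random-erasure setup), the AR term is the same under heads and tails and cancels out of the update. A sketch with exact fractions and equal priors on the coin:

```python
from fractions import Fraction as F

prior = {'heads': F(1, 2), 'tails': F(1, 2)}
p_green = {'heads': F(1, 1000), 'tails': F(999, 1000)}  # P(#707 in a green room | coin)
p_ar = F(1, 1000)  # P(#707 is the one clone who keeps AR): independent of coin and room

likelihood = {c: p_green[c] * p_ar for c in prior}
evidence = sum(prior[c] * likelihood[c] for c in prior)
posterior = {c: prior[c] * likelihood[c] / evidence for c in prior}
print(posterior)  # {'heads': Fraction(1, 1000), 'tails': Fraction(999, 1000)}
```

So learning that you are the one who still reasons anthropically doesn’t shift the odds at all, and tails remains 999 times more likely, just as in the no-erasure case.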
Doh! Looks like I was reasoning about something I made up myself rather than Jordan’s comment.
I like this example because it has nice tidy prior probabilities. That’s very much lacking in the Doomsday Argument—how do you distribute a prior over a value that has no obvious upper bound? For any particular finite number of people that will ever live, is the prior probability that exactly that many will live much greater than zero? Even if I can identify something truly special about the reference class “among the first 100 billion people” as opposed to any other mathematically definable group—and thus push down the posterior probabilities of very large numbers of people eventually living—it doesn’t seem to push down very far.