The Validity of the Anthropic Principle
In my last post, I argued that the anthropic principle is often misapplied: it cannot be used within a single model, only for comparing two or more models. This post will explain why I think the anthropic principle is valid in every case where we aren't making those mistakes.
Many probability problems have been discussed on this site, and one popular viewpoint is that probabilities cannot be discussed as existing by themselves, but only in relation to a series of bets. Imagine that there are two worlds: World A has 10 people and World B has 100. Each world has a prior probability of 50% of being the actual world. Should World B instead be given 10:1 odds, because it has ten times as many people and the anthropic principle applies? This sounds surprising, but I would say yes: a correct bet on World A would have to pay each bettor 10 times as much as a correct bet on World B for you to be indifferent between the two. In other words, if there is a bet that gains or loses you money according to whether you are in World A or World B, you should bet as though the probability of being in World B is 10 times that of being in World A. That doesn't quite show that the odds are 10:1, but it comes rather close. I can't remember the exact process/theorem for determining probabilities from betting odds. Can anyone link it to me?
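To make this concrete, here is a minimal simulation sketch of the betting setup (my own illustration, not part of the argument above; the populations of 10 and 100 are taken from the example):

```python
import random

# Minimal sketch of the betting argument: a fair coin picks the actual
# world, and every observer that results is pooled. The fair betting
# odds match the fraction of pooled observers in each world.

POPULATION = {"A": 10, "B": 100}  # observers per world, from the example
TRIALS = 100_000

observers = []
for _ in range(TRIALS):
    world = random.choice(["A", "B"])  # 50% prior for each world
    observers.extend([world] * POPULATION[world])

# Fraction of observers who find themselves in World B.
p_b = observers.count("B") / len(observers)
print(f"P(in World B | I am an observer) ~ {p_b:.3f}")  # ~0.909, i.e. 10:1
```

A randomly drawn observer lands in World B roughly 10 times out of 11, which is exactly the 10:1 betting odds.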
Another way to see that the anthropic principle is probably correct is to note that if World A had 0 people instead, then there would be a 100% chance of observing World B rather than World A. This doesn't prove much, but it does show that anthropic effects exist on some level.
Suppose now that World A has 1 person and World B has 1 million people. Maybe you aren't convinced that you are more likely to observe World B. Consider an equivalent formulation: World A has a single, extremely unobservant person who has only a 1 in a million chance of noticing the giant floating A in their world, while World B has a single person with a 100% chance of noticing the giant floating B in theirs. I think it is clear that you are more likely to notice a giant floating B than a giant floating A.
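As a sanity check (my own arithmetic, not part of the argument above), the reformulation follows directly from Bayes' theorem with the 50% priors:

```python
# Posterior probability of World B given that you noticed a giant letter,
# using the noticing chances from the reformulation above.
prior_a = prior_b = 0.5
p_notice_given_a = 1e-6  # the extremely unobservant person in World A
p_notice_given_b = 1.0   # the fully observant person in World B

p_b = (prior_b * p_notice_given_b) / (
    prior_a * p_notice_given_a + prior_b * p_notice_given_b
)
print(p_b)  # ~0.999999: noticing a B is about a million times likelier
```

This matches the million-to-one observer ratio in the original version.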
One more formulation: let World A have 10 humans and 90 cyborgs, and World B have 100 humans. We can then ask about the probability of being in World B given that you are a human observing the world. It seems clear that, given you are a human, you are 10 times as likely to be in World B as in World A. And this should be equivalent to the original problem, since the cyborgs don't change anything.
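The same calculation (again my own sketch, not the post's) handles the cyborg version: condition on being a human rather than just an observer.

```python
# P(World B | you are a human), with 50% priors. Both worlds contain
# 100 observers, but only 10 of World A's observers are human.
prior = 0.5
p_human_given_a = 10 / 100
p_human_given_b = 100 / 100

p_b = (prior * p_human_given_b) / (
    prior * p_human_given_a + prior * p_human_given_b
)
print(p_b)  # ~0.909: the same 10:1 odds as the original 10-vs-100 problem
```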
I admit that none of this is fully rigorous philosophical reasoning, but I thought I'd post it anyway: a) to get feedback, and b) to see if anyone denies the use of the anthropic principle in this way (as opposed to the way described in my last post), which would give me more motivation to try to make all of this more formal.
Update: I thought it was worth adding that applying the anthropic principle to two models is really very similar to null hypothesis testing to determine whether a coin is biased. If there are a million people in one possible world, but only one in the other, it would seem an amazing coincidence for you to be that one.
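To make the analogy concrete (a hypothetical coin example of my own, not from the post), the rejection of a fair coin and the anthropic update share the same structure: the hypothesis under which your observation is an amazing coincidence loses out.

```python
# Analogy with testing whether a coin is biased: 20 heads in 20 flips.
p_data_if_fair = 0.5 ** 20    # ~9.5e-7: a huge coincidence under the null
p_data_if_biased = 1.0        # unsurprising if the coin always lands heads

print(p_data_if_biased / p_data_if_fair)  # likelihood ratio ~1e6

# Anthropic analogue: your existence as a particular observer is a
# million-to-one coincidence in the one-person world but unsurprising
# in the million-person world, so the latter wins by the same reasoning.
```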