Anthropic Atheism

(Crossposted from my blog)

I’ve been developing an approach to anthropic questions that I find less confusing than others, which I call Anthropic Atheism (AA). The name is a snarky reference to the ontologically basic status of observers (souls) in other anthropic theories. I’ll have to explain myself.
We’ll start with what I call the “Sherlock Holmes Axiom” (SHA), which will form the epistemic background for my approach:
How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?
Which I reinterpret as “Reason by eliminating those possibilities inconsistent with your observations. Period.” I use this as a basis for epistemology. Basically: think of all possible world-histories, assign a probability to each according to whatever principles you like (e.g. Occam’s razor), eliminate those inconsistent with your observations, and renormalize the probabilities that remain. I won’t go into the details, but probability theory (e.g. Bayes’ theorem) falls out of this just fine when you translate P(E|H) as “the portion of possible worlds consistent with H that predict E”. So it’s not really a different theory; but with SHA as our basis, I find certain confusing questions less confusing, and certain unholy temptations less tempting.
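To make this concrete, here’s a minimal sketch of SHA-style updating (the worlds and priors are made up for illustration), showing that eliminate-and-renormalize agrees with Bayes’ theorem:

```python
# A minimal sketch of SHA-style updating: enumerate possible worlds,
# eliminate those inconsistent with the evidence, renormalize.
# The worlds and their priors here are invented for illustration.

worlds = {
    # world: (prior, hypothesis H holds, world predicts evidence E)
    "w1": (0.3, True,  True),
    "w2": (0.2, True,  False),
    "w3": (0.4, False, True),
    "w4": (0.1, False, False),
}

def sha_update(worlds, observed_E=True):
    """Keep only worlds consistent with the observation, then renormalize."""
    surviving = {w: p for w, (p, h, e) in worlds.items() if e == observed_E}
    total = sum(surviving.values())
    return {w: p / total for w, p in surviving.items()}

posterior = sha_update(worlds)

# P(H | E) by elimination-and-renormalization...
p_h_given_e = sum(p for w, p in posterior.items() if worlds[w][1])

# ...agrees with Bayes' theorem: P(E|H) P(H) / P(E)
p_h = 0.3 + 0.2
p_e_given_h = 0.3 / p_h   # portion of H-worlds that predict E
p_e = 0.3 + 0.4
assert abs(p_h_given_e - p_e_given_h * p_h / p_e) < 1e-12
print(p_h_given_e)  # 0.4285... = 3/7
```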
With that out of the way, let’s have a look at some confusing questions. First up is the Doomsday Argument. From La Wik:
Simply put, it says that supposing the humans alive today are in a random place in the whole human history timeline, chances are we are about halfway through it.
The article goes on to claim that “There is a 95% chance of extinction within 9120 years.” Hard to refute, but nevertheless it makes one rather uncomfortable that the mere fact of one’s existence should have predictive consequences.
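For the curious, here’s a back-of-the-envelope reconstruction of where that 9120-year figure plausibly comes from; the inputs (roughly 60 billion humans born so far, a stable future population of 10 billion, 80-year lifespans) are my reading of the standard derivation, not a quotation:

```python
# Back-of-the-envelope reconstruction of the 9120-year figure
# (inputs assumed, per the usual presentation of the argument).

past_births = 60e9            # humans born so far (assumed)
# With 95% confidence we are not in the first 5% of all humans,
# so the total number of births N < past_births / 0.05:
max_total = past_births / 0.05       # 1.2 trillion
remaining = max_total - past_births  # 1.14 trillion births to go

population = 10e9   # assumed stable future population
lifespan = 80       # years, so births/year ~ population / lifespan
births_per_year = population / lifespan  # 125 million per year

print(remaining / births_per_year)  # 9120.0 years
```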
In response to the Doomsday Argument, some have endorsed the “Self-Indication Assumption” (SIA), a principle named by Nick Bostrom, which states that “All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.” Applied to the Doomsday Argument, it says that you are just as likely to exist in 2014 in a world where humanity grows up to create a glorious everlasting civilization as in one where we wipe ourselves out in the next hundred years, so you can’t update on the mere fact of your existence. This is comforting, as it defuses the Doomsday Argument.
By contrast, the Doomsday Argument is a consequence of the “Self-Sampling Assumption” (SSA), which states that “All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.”
Unfortunately for SIA, it implies that “Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.” Surely that should not follow, but it clearly does. So we can formulate another anthropic problem:
It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion trillion trillion observers. The super-duper symmetry considerations seem to be roughly indifferent between these two theories. The physicists are planning on carrying out a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: “Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1…”
This one is called the “presumptuous philosopher”. Clearly the presumptuous philosopher should not get a Nobel prize.
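To see where the trillion-to-one factor comes from, here’s a sketch of the SIA weighting (equal priors, as in the thought experiment; this is an illustration, not Bostrom’s formalism):

```python
# SIA weighting sketch: each theory's prior is weighted by its number
# of observers, then renormalized. Equal priors, as in the story.

prior = {"T1": 0.5, "T2": 0.5}
observers = {"T1": 1e24, "T2": 1e36}  # trillion trillion vs trillion^3

weighted = {t: prior[t] * observers[t] for t in prior}
total = sum(weighted.values())
posterior = {t: w / total for t, w in weighted.items()}

print(posterior["T2"] / posterior["T1"])  # 1e12: a trillion to one for T2
```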
These questions have caused much psychological distress and been beaten to death in certain corners of the internet, but as far as I know, few people have satisfactory answers. Wei Dai’s Updateless Decision Theory (UDT) might be satisfactory here, and might turn out to be equivalent to my answer once the dust settles.
So what’s my objection to these schemes, and what’s my scheme?
My objection is aesthetic: I don’t like that SIA and SSA seem to confer some kind of ontological specialness on “observers”. This reminds me far too much of souls, which are nonsense. The whole “reference class” business rubs me the wrong way as well; reference classes are useful tools for statistical approximation, not fundamental features of epistemology. So I’m hesitant to accept these theories.
Instead, I take the position that you can never conclude anything from your own existence except that you exist. That is, I eliminate all hypotheses that don’t predict my existence, and leave it at that, in accordance with SHA. No update happens in the Doomsday Argument: both glorious futures and impending doom are consistent with my existence, so their relative probability has to come from other reasoning. And the presumptuous philosopher is an idiot, because both theories are consistent with us existing, so again there is no relative update.
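To make the contrast concrete, here’s a toy comparison with invented numbers: a “doom soon” hypothesis with 100 billion total humans ever, a “glorious future” hypothesis with 100 quadrillion, and a birth rank of about 60 billion for me:

```python
# Toy comparison (invented numbers): how SSA and AA/SHA treat the same
# evidence, "my birth rank is about 60 billion".

rank = 60e9
totals = {"doom_soon": 100e9, "glorious_future": 1e17}
prior = {"doom_soon": 0.5, "glorious_future": 0.5}

def normalize(d):
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

# SSA: I'm a random sample from all humans ever, so P(rank | N) = 1/N
# whenever rank <= N. Small worlds make my rank more likely.
ssa = normalize({h: prior[h] * (1 / n if rank <= n else 0)
                 for h, n in totals.items()})

# AA/SHA: both hypotheses are consistent with my existence, so each gets
# likelihood 1; elimination removes nothing, and nothing updates.
aa = normalize({h: prior[h] * (1 if rank <= n else 0)
                for h, n in totals.items()})

print(ssa)  # heavily favors doom_soon (by a factor of a million)
print(aa)   # unchanged 50/50
```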
By reasoning purely from consistency of possible worlds with observations, SHA gives us a reasonably principled way to just punt on these questions. Let’s see how it does on another anthropic question, the Sleeping Beauty Problem:
Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Beauty will be wakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake: if the coin comes up heads, Beauty will be wakened and interviewed on Monday only. If the coin comes up tails, she will be wakened and interviewed on Monday and Tuesday. In either case, she will be wakened on Wednesday without interview and the experiment ends.
Any time Sleeping Beauty is wakened and interviewed, she is asked, “What is your belief now for the proposition that the coin landed heads?”
SHA says that the coin came up heads in half of the worlds, and no further update happens based on existence. I’m slightly uncomfortable with this, because SHA is cheerfully biting a bullet that has confused many philosophers. However, I see no reason not to bite it; it doesn’t seem to have any particularly controversial implications for actual decision making. If she is paid for each correct guess, for example, she’ll guess tails: that way she wins $2 in the half of worlds where the coin came up tails, instead of $1 in the half where it came up heads. If she’s paid only for the Monday guess, she’s indifferent between the options, as she should be.
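A quick sanity check of that claim, computed world-by-world with no per-awakening probabilities anywhere:

```python
# Expected winnings of each guessing policy, computed over the two
# equally likely worlds (heads: 1 awakening; tails: 2 awakenings).

# Payout scheme 1: $1 per correct guess at every awakening.
ev_guess_tails = 0.5 * 0 + 0.5 * 2   # $1.00
ev_guess_heads = 0.5 * 1 + 0.5 * 0   # $0.50

# Payout scheme 2: only the Monday guess is paid.
ev_tails_monday_only = 0.5 * 0 + 0.5 * 1  # $0.50
ev_heads_monday_only = 0.5 * 1 + 0.5 * 0  # $0.50

print(ev_guess_tails, ev_guess_heads)              # 1.0 0.5 -> guess tails
print(ev_tails_monday_only, ev_heads_monday_only)  # 0.5 0.5 -> indifferent
```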
What if we modify the problem slightly, and ask Sleeping Beauty for her credence that it’s Monday? That is, her credence that “it” “is” Monday. If the coin came up heads, there is only Monday, but if it came up tails, there is a Monday observer and a Tuesday observer. AA/SHA reasons purely from the perspective of possible worlds: it says that Monday is consistent with observations, as is Tuesday, and refuses to speculate further on which “observer” among possible observers she “is”. Again, given an actual decision problem with an actual payoff structure, AA/SHA quickly reaches the correct decision, even while refusing to assign probabilities “between observers”.
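For instance, suppose she’s paid $1 per awakening at which she correctly guesses the day (a payoff structure I’m inventing for illustration). AA/SHA just scores whole policies against whole worlds:

```python
# Score each day-guessing policy against the two worlds, $1 per correct
# awakening. (Hypothetical payoff structure, invented for illustration.)

worlds = {"heads": (0.5, ["Mon"]), "tails": (0.5, ["Mon", "Tue"])}

def ev(policy):  # policy: the day she always guesses
    return sum(p * sum(1 for day in days if day == policy)
               for p, days in worlds.values())

print(ev("Mon"))  # 1.0: right once in each world
print(ev("Tue"))  # 0.5: right only at the tails-Tuesday awakening
```

She should always guess Monday: a definite decision, reached without ever assigning a probability to “it is Monday”.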
It may look like we’ve casually thrown out probability theory the moment it became inconvenient, but we haven’t; we’ve just refused to answer a meaningless question. The meaninglessness of indexical uncertainty becomes apparent once you stop believing in the specialness of observers. It’s like asking “What’s the probability that the Sun rather than the Earth?” That the Sun what? The Sun and the Earth both exist, for example, but maybe you meant something else. Want to know which one this here comet is going to hit? Sure, I’ll answer that; but these generic “which one” questions are meaningless.
I’m not deeply familiar with UDT, but this really is starting to remind me of it; perhaps it even is part of UDT. In any case, Anthropic Atheism seems to give easy, intuitive answers to anthropic questions. Maybe it breaks down on some edge case, though. If so, I’d like to see it. In the meantime, I don’t believe in observers.
ADDENDUM: As Wei Dai, DanielLC, and Tyrrell_McAllister point out below, it turns out this doesn’t actually work. The objection is that by refusing to include the indexical hypothesis, we end up favoring universes with more variety of experiences (because they have a high chance of containing *our* experiences) and sacrificing the ability to predict much of anything. Oops. It was fun while it lasted ;)
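For posterity, here’s a toy version of that objection (a hypothetical setup with 2-bit “experiences”; the hypotheses and numbers are invented):

```python
# Toy version of the objection: refusing indexical probabilities means a
# hypothesis only scores 0 or 1 on an observation, so "zoo" universes
# containing every possible experience can never lose probability.

all_experiences = {"00", "01", "10", "11"}

hypotheses = {
    "tidy": {"00"},            # predicts exactly one experience
    "zoo":  all_experiences,   # contains observers having every experience
}
prior = {"tidy": 0.5, "zoo": 0.5}

def aa_update(observation):
    # AA/SHA: keep any hypothesis whose universe contains the observation.
    kept = {h: prior[h] for h, exps in hypotheses.items() if observation in exps}
    s = sum(kept.values())
    return {h: p / s for h, p in kept.items()}

print(aa_update("00"))  # still 50/50: the zoo is never penalized
print(aa_update("11"))  # {'zoo': 1.0}: any other observation favors the zoo
```

An ordinary Bayesian would give the tidy hypothesis a likelihood of 1 for seeing “00” and the zoo a likelihood of only 1/4, so the zoo bleeds probability with every observation. AA refuses that step, so the zoo never pays any penalty, and a theory consistent with everything predicts nothing.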