Is there any way to do anthropic reasoning besides SIA and SSA? This includes anything you might call “not doing anthropic reasoning” as long as it isn’t self-contradictory.
Yes. My post gives a somewhat muddled explanation of what I think the right way is. The key idea is that I treat “waking up in one of the possible worlds” as a new piece of information that tells you that the possibilities are now mutually exclusive, but doesn’t tell you anything else.
I’m having a hard time understanding it. You seem to be saying that {T, Monday}, and {T, Tuesday} are either both true or both false before, but mutually exclusive after. If you mean them ever happening, they’d be both true or both false either way. If you mean them happening now, they’d be both false before and mutually exclusive after. Were you talking about the probability that they ever happen before the experiment, and the probability that they’re happening now during? If so, you should mark that by calling them something like {T, Monday, Ever} and {T, Monday, Today}. Thus {T, Monday, Ever} and {T, Tuesday, Ever} are either both true or both false, but {T, Monday, Today} and {T, Tuesday, Today} can occur in any combination except both true.
If you wake up the day before the experiment, you eliminate {T, Monday, Today}, {T, Tuesday, Today} and {H, Monday, Today}. If you wake up during the experiment, you eliminate {T, Sunday, Today} and {H, Sunday, Today}. This looks like SIA. Am I missing something?
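Here is a minimal sketch of that "Today" bookkeeping (the tuple encoding is my own, just for illustration); it only does the elimination step and assigns no probabilities:

```python
# Minimal sketch (my own encoding, not from the post): the "Today" hypotheses
# as (coin, day) pairs, plus the eliminations described above.
today_hypotheses = [
    ("H", "Sunday"), ("H", "Monday"),
    ("T", "Sunday"), ("T", "Monday"), ("T", "Tuesday"),
]

def eliminate(hypotheses, consistent):
    """Keep only the hypotheses consistent with what was just observed."""
    return [h for h in hypotheses if consistent(h)]

# Waking up the day before the experiment eliminates every non-Sunday hypothesis:
before_experiment = eliminate(today_hypotheses, lambda h: h[1] == "Sunday")
# -> [('H', 'Sunday'), ('T', 'Sunday')]

# Waking up during the experiment eliminates the Sunday hypotheses instead:
during_experiment = eliminate(today_hypotheses, lambda h: h[1] != "Sunday")
# -> [('H', 'Monday'), ('T', 'Monday'), ('T', 'Tuesday')]

print(before_experiment)
print(during_experiment)
```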
You seem to be saying that {T, Monday}, and {T, Tuesday} are either both true or both false before, but mutually exclusive after.
You’ve hit one of the points I muddled before :D There are two different questions—“what day will I wake up” vs. “what day is it” basically. But there’s an alternative: “what day will I wake (or have woken) up given that I just woke up?” Phrased like this, you can see how Sleeping Beauty’s question can be produced by adding information to the question before the experiment.
This looks like SIA.
Anything that is “SIA” is also “SSA,” since SIA can be produced by adding on more information. For example, take the distribution given by “you are an observer who is a human.”
If you use SSA correctly, you will take SSA’s prior distribution (given by “you are an observer”) and then update on it with “who is a human” to get the final distribution.
If you use something more like SIA correctly, you’ll just return your prior distribution (given by “you are an observer who is a human”).
It’s still possible to get into trouble if you start with SIA (“you are an observer who is a human who exists”) and are not in fact a human. This is because you can add information, but the only way to take it away is to start over without SIA.
The moral of this story is that, if the right answer to something includes SIA information ("you are a human who exists"), the disagreement cannot be "SIA vs. SSA," since starting with SSA would produce the right answer too; it just takes an extra step. At least one person is simply doing probability wrong (though it's always good to remember "Just because you two are arguing, doesn't mean one of you is right"). I do blame the labels "SIA" and "SSA" somewhat for this phenomenon, since labels sometimes mean people are excused from thinking.
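A toy chain-rule illustration of what I mean (the specific numbers and the "fraction of observers who are human" breakdown are made up for the example):

```python
# Two candidate worlds with made-up numbers.  The claim: conditioning first on
# "you are an observer" and then on "...who is a human" lands on the same
# distribution as treating "you are an observer who is a human" as the starting point.
priors = {"W1": 0.5, "W2": 0.5}               # physical priors over the worlds
p_observer = {"W1": 1.0, "W2": 1.0}           # you exist as an observer either way
p_human_given_obs = {"W1": 0.25, "W2": 0.75}  # fraction of observers who are human

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

# Two-step route (the "SSA" bookkeeping): condition on "observer", then on "human".
after_observer = normalize({w: priors[w] * p_observer[w] for w in priors})
after_human = normalize({w: after_observer[w] * p_human_given_obs[w] for w in priors})

# One-step route (the "SIA" bookkeeping): fold both pieces into the prior at once.
direct = normalize({w: priors[w] * p_observer[w] * p_human_given_obs[w] for w in priors})

print(after_human)  # {'W1': 0.25, 'W2': 0.75}
print(direct)       # {'W1': 0.25, 'W2': 0.75}
```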
Anything that is “SIA” is also “SSA,” since SIA can be produced by adding on more information.
No it’s not. If you add on “you are an observer who is a human”, you update so that universes containing human observers become more likely. They do work out the same if you use different priors: make universes with more people proportionally more likely under SSA, or less likely under SIA, use “conscious observer” as your reference class in both, and you’d get the same thing. I still don’t see how your method works.
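Toy numbers (made up for illustration, not anyone's considered position) for that prior-shifting point:

```python
# Two universes with a flat "physical" prior; U_big has ten observers, U_small has one.
# SIA reweights by observer count; SSA over the reference class "conscious observer"
# (with no further indexical evidence) does not.  Shifting the bare priors by observer
# count in the right direction makes the two procedures give the same answer.

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

observers = {"U_small": 1, "U_big": 10}
flat_prior = {"U_small": 0.5, "U_big": 0.5}

def sia(prior):
    # SIA: weight each universe by how many observers it contains.
    return normalize({u: prior[u] * observers[u] for u in prior})

def ssa(prior):
    # SSA with "conscious observer" as the reference class: merely finding yourself
    # to be an observer doesn't shift the distribution over universes.
    return normalize(dict(prior))

boosted = {u: flat_prior[u] * observers[u] for u in flat_prior}    # more people -> more likely
penalized = {u: flat_prior[u] / observers[u] for u in flat_prior}  # more people -> less likely

print(sia(flat_prior), ssa(boosted))    # same distribution, roughly 1/11 vs 10/11
print(ssa(flat_prior), sia(penalized))  # same distribution, 1/2 vs 1/2
```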
Probabilities come from information. So priors come from your starting information. Which information you label “starting” is completely arbitrary, however—you can’t get different answers just by relabeling what you know.
So any problem that you can look at from an SIA perspective, you can also take a step back and look at it from an SSA perspective—it just means labeling less information “starting” information.
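Put as a formula, this is just the product rule ($W$ is a hypothesis, $I_1$ and $I_2$ are pieces of information, and which of them you call "starting" is up to you):

$$P(W \mid I_1, I_2) = \frac{P(W \mid I_1)\,P(I_2 \mid W, I_1)}{P(I_2 \mid I_1)} = \frac{P(W)\,P(I_1, I_2 \mid W)}{P(I_1, I_2)}$$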