I’m not a fan of “Shut Up and Multiply” where it means taking a maths equation and applying it without stopping to think about the assumptions it is based upon and whether those assumptions are appropriate for the context. You can certainly catch errors by writing up a formal proof, but we need to figure out whether our formalisation is appropriate first.
Indeed, Said Achmiz was able to obtain a different answer by formalising the problem differently. So the key question ends up being which formalisation is appropriate. As I explain in my comment, the question is whether p(survival) is
a) the probability that a pre-war person will survive until after the Cold War and observe that they didn’t die, given that nothing other than a nuclear holocaust would kill them (following Stuart Armstrong)
b) the probability that a post-war person will observe that they survived (following Said Achmiz)
Stuart Armstrong is correct because probability problems usually implicitly assume that the agent knows the problem, so a post-war person is already assumed to know that they survived. In other words, b) involves asking someone who already knows that they survived to update on the fact that they survived again. Of course they aren’t going to update!
Concrete example
Anyway, it’ll be easier to understand what is happening here if we make it more concrete. On a gameshow, if a coin comes up heads, the contestants face a dangerous challenge that only a 1⁄3 survive, otherwise they face a safe(r) challenge that 1⁄2 survive. We will assume there are two lots of 6 people and that those who are “eliminated” aren’t actually killed, but just fail to make it to the next round.
This leads us to expect that one group faces the dangerous challenge and one the safe challenge. So overall we expect: Survivors (3 safe, 2 dangerous), Eliminated (3 safe, 4 dangerous). This leads to the following results:
If we survey everyone: The survivors have a higher ratio of people from the safe challenge than those who were eliminated
If we only survey survivors: A disproportional number of survivors come from the safer world
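These expected counts can be checked with a short sketch. The figures below are my own illustration of the gameshow setup described above (two groups of 6, survival rates 1⁄2 and 1⁄3), working with expected counts rather than a random simulation:

```python
from fractions import Fraction

# Two groups of 6: one faces the safe challenge (1/2 survive),
# the other the dangerous challenge (1/3 survive). Expected counts:
safe_survive = 6 * Fraction(1, 2)       # 3 survivors from the safe challenge
safe_eliminated = 6 - safe_survive      # 3 eliminated from the safe challenge
danger_survive = 6 * Fraction(1, 3)     # 2 survivors from the dangerous challenge
danger_eliminated = 6 - danger_survive  # 4 eliminated from the dangerous challenge

# Proportion of people who came from the safe challenge, in each group:
p_safe_given_survived = safe_survive / (safe_survive + danger_survive)
p_safe_given_eliminated = safe_eliminated / (safe_eliminated + danger_eliminated)

print(p_safe_given_survived)    # 3/5
print(p_safe_given_eliminated)  # 3/7
```

The survivors are 3⁄5 from the safe challenge, versus 3⁄7 for the eliminated, so whichever group we survey, the safe challenge is over-represented among survivors.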
So regardless of which group we consider relevant, we get the result that we claimed above. I’ll consider the complaint that “dead people don’t get asked the question”. If we are asked a conditional probability question like, “What is the chance of scoring at least 10 with two dice if the first die is a 4?”, then the systematic way to answer it is to list all 36 possibilities and eliminate those where the first die isn’t a 4. Applying this to “If we survived the Cold War, what is the probability...” we see that we should begin by eliminating all people who don’t survive the Cold War from the set of possibilities. Since we’ve already eliminated the people who die, it doesn’t matter that we can’t ask them questions. How could it? We don’t even want to ask them questions! The only time we need to handle this separately is when the condition is correlated with our ability to be asking the question, but doesn’t guarantee it. An example would be if a few people end up in a coma; then we might want to update separately on our ability to be asking the question.
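The “list and eliminate” method for the dice question can be sketched directly (my own illustration of the calculation described above):

```python
from itertools import product
from fractions import Fraction

# "What is the chance of scoring at least 10 with two dice
#  if the first die is a 4?"
outcomes = list(product(range(1, 7), repeat=2))          # all 36 possibilities
conditioned = [(a, b) for (a, b) in outcomes if a == 4]  # eliminate first die != 4
favourable = [(a, b) for (a, b) in conditioned if a + b >= 10]

print(Fraction(len(favourable), len(conditioned)))  # 1/6
```

Only (4, 6) reaches 10 among the six remaining possibilities, so the conditional probability is 1⁄6; the 30 eliminated rolls never enter the calculation, just as the people who didn’t survive don’t.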
Boltzmann Brains
I think you’ve engaged in something of a dodge here. Yes, all of our predictions would be screwed if we are a Boltzmann Brain; so if we want to get physics correct we have to hope that this isn’t the case. However, your version of anthropics requires us to hope much harder. In other theories, we just have to hope that the calculations indicating that Boltzmann Brains are the most likely scenario are wrong. However, in your anthropics, if the probability of at least one Boltzmann Brain with our sensations approaches 1, then we can’t update on our state at all. This holds even if we know that the vast majority of beings with that state aren’t Boltzmann Brains. That makes the problem much, much worse than it is under other theories.
Clarifying this with the Tuesday Problem
I think you’ve made the same mistake that I’ve identified here:
A man has two sons. What is the chance that both of them are born on the same day if at least one of them is born on a Tuesday?
Most people expect the answer to be 1⁄7, but the usual answer is that 13⁄49 of the possibilities have at least one son born on a Tuesday and 1⁄49 has both born on a Tuesday, so the chance is 1⁄13. Notice that if we had been told, for example, that one of them was born on a Wednesday we would have updated to 1⁄13 as well.
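The 13⁄49 and 1⁄49 figures can be verified by enumerating the 49 equally likely (day, day) pairs (a sketch of the standard calculation, with day 0 standing for Tuesday):

```python
from itertools import product
from fractions import Fraction

# All 49 equally likely birth-day pairs for the two sons; day 0 = Tuesday.
pairs = list(product(range(7), repeat=2))
at_least_one_tuesday = [p for p in pairs if 0 in p]
both_same_day = [p for p in at_least_one_tuesday if p[0] == p[1]]

print(len(at_least_one_tuesday))                                # 13
print(Fraction(len(both_same_day), len(at_least_one_tuesday)))  # 1/13
```

Of the 13 pairs containing a Tuesday, only (Tuesday, Tuesday) has both sons born on the same day, giving 1⁄13.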
The point is that there is a difference between the following:
a) meeting a random son and noting that he was born on Tuesday
b) discovering that one of the two sons (you don’t know which) was born on a Tuesday
Similarly there is a difference between:
a) Discovering a random consciousness is experiencing a stream of events
b) Discovering that at least one consciousness is experiencing that stream of events
The only reason why this gives the correct answer for the first problem is that (in the simplification) we assume all consciousnesses before the war either survive or all of them die. This makes a) and b) coincide, so that it doesn’t matter which one is used.
As for the Tuesday problem, that seems to go away if you consider the process that told you at least one of them was born on a Tuesday (similar to the Monty Hall problem, depending on how the presenter chooses the door to open). If you model it as “it randomly selected one son and reported the day he was born on”, then that selects Tuesday with twice the probability in the case where the two sons were born on a Tuesday, and this gives you the expected 1⁄7.
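The random-reporting model described above can be checked exactly (my own sketch: nature picks one of the two sons uniformly at random and reports his birth day, with day 0 standing for Tuesday):

```python
from itertools import product
from fractions import Fraction

# One of the two sons is picked at random and his birth day is reported.
p_same_day_and_tuesday = Fraction(0)  # both born same day AND "Tuesday" reported
p_tuesday_reported = Fraction(0)      # "Tuesday" reported at all
for d1, d2 in product(range(7), repeat=2):
    p_pair = Fraction(1, 49)
    # Chance the report says Tuesday: 1 if both sons are Tuesday-born,
    # 1/2 if exactly one is, 0 otherwise.
    p_report_tue = Fraction([d1, d2].count(0), 2)
    p_tuesday_reported += p_pair * p_report_tue
    if d1 == d2:
        p_same_day_and_tuesday += p_pair * p_report_tue

print(p_same_day_and_tuesday / p_tuesday_reported)  # 1/7
```

The double-Tuesday pair is twice as likely to produce a “Tuesday” report as each single-Tuesday pair, and conditioning on the report rather than on the bare fact recovers the intuitive 1⁄7.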
“As for the Tuesday problem, that seems to go away if you consider the process that told you at least one of them was born on a Tuesday”—I don’t think we disagree about how the Tuesday problem works. The argument I’m making is that your method of calculating probabilities is calculating b) when we actually care about a).
To bring it back to the Tuesday problem, let’s suppose you’ll meet the first son on Monday and the second on Tuesday, but in between your memory will be wiped. You wake up on one of those days (not knowing which day it is) and you notice that the son you meet was born on a Tuesday. This observation corresponds to a) meeting a random son and noting that he was born on Tuesday, not b) discovering that one of the two sons (you don’t know which) was born on a Tuesday. Similarly, our observation corresponds to a) not b) for Sleeping Beauty. Admittedly, a) requires indexicals and so isn’t defined in standard probability theory. This doesn’t mean that we should attempt to cram it into the standard theory, but instead that we should extend the theory.
Thanks for the concrete example, and I agree with the Boltzmann brain issue. I’ve actually concluded that no anthropic probability theory works in the presence of duplicates: https://www.lesswrong.com/posts/iNi8bSYexYGn9kiRh/paradoxes-in-all-anthropic-probabilities
It’s all a question of decision theory, not probability.
https://www.lesswrong.com/posts/RcvyJjPQwimAeapNg/torture-vs-dust-vs-the-presumptuous-philosopher-anthropic
https://arxiv.org/abs/1110.6437
https://www.youtube.com/watch?v=aiGOGkBiWEo
I’m not sure that can be done: https://www.lesswrong.com/posts/iNi8bSYexYGn9kiRh/paradoxes-in-all-anthropic-probabilities