In case D, your probability changes from 99% to 50%, because only people who survive are ever in a position to know about the situation; in other words, there is a 50% chance that only red-doored people know, and a 50% chance that only blue-doored people know.
After that, the probability remains at 50% all the way through.
The fact that no one has mentioned this in 44 comments is a sign of incredibly strong wishful thinking: simply “wanting” the Doomsday argument to be incorrect.
Then put a situation C’ between C and D, in which people who are to be killed will be informed about the situation just before being killed (the survivors are still only told after the fact).
Then how does telling these people something just before putting them to death change anything for the survivors?
In C’, the probability of being behind a blue door remains at 99% (as you wished it to), both for whoever is killed and for the survivor(s). But the reason for this is that everyone finds out all the facts: the survivor(s) know that even if the coin flip had gone the other way, they would have known the facts, only before being killed, while those who are killed know that they would have known the facts afterward, if the coin flip had gone the other way.
Telling people something just before death changes something for the survivors, because the survivors are told that the other people were told something. This additional knowledge changes the survivors’ subjective estimate (in comparison to what it would be if they were told that the non-survivors were told nothing).
In case D, on the other hand, all the survivors know that only survivors ever know the situation, and so they assign a 50% probability to being behind a blue door.
I don’t see it. In D, you are informed that 100 people were created, separated into two groups, and each group then had a 50% chance of survival. You survived. So calculate the probability:
P(red|survival) = P(survival and red)/P(survival) = 0.005/0.5 = 1%.
Not 50%.
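For concreteness, here is that calculation in a short sketch (my own illustration, assuming the setup discussed above: 100 people, 1 behind a red door, 99 behind blue doors, and a fair coin deciding which group is killed):

```python
from fractions import Fraction

# Assumed setup: 100 people, 1 red door, 99 blue doors;
# a fair coin decides which group is killed.
p_red = Fraction(1, 100)                      # prior: you are behind the red door
p_survival_and_red = p_red * Fraction(1, 2)   # red-doored people survive on one side of the coin
p_survival = Fraction(1, 2)                   # exactly one group survives either way

# P(red | survival) = P(survival and red) / P(survival)
print(p_survival_and_red / p_survival)        # 1/100, i.e. 1%
```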
This calculation is incorrect because “you” are by definition someone who has survived (in case D, where the non-survivors never know about it); had the coin flip gone the other way, “you” would have been chosen from the other survivors. So you can’t update on survival in that way.
You do update on survival, but like this: you know there were two groups of people, each of which had a 50% chance of surviving. You survived. So there is a 50% chance you are in one group, and a 50% chance you are in the other.
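The two answers correspond to two different sampling assumptions about who “you” are, which can be made concrete with a simulation (a sketch of my own, not part of the original exchange; it assumes 1 red door and 99 blue doors):

```python
import random

TRIALS = 200_000
DOORS = ['red'] + ['blue'] * 99  # assumed setup: 1 red door, 99 blue doors

def estimate(sampling):
    red = total = 0
    for _ in range(TRIALS):
        heads = random.random() < 0.5
        surviving_color = 'red' if heads else 'blue'
        if sampling == 'fixed person':
            # First update: "you" are a fixed person among the 100,
            # and we condition on your having survived.
            you = random.choice(DOORS)
            if you != surviving_color:
                continue  # you were killed; discard this run
        else:
            # Second update: "you" are a random member of whoever survived.
            you = surviving_color
        total += 1
        red += (you == 'red')
    return red / total

print(estimate('fixed person'))     # ~0.01, the P(red|survival) calculation above
print(estimate('random survivor'))  # ~0.50, the two-groups update
```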
had the coin flip gone the other way, “you” would have been chosen from the other survivors
Thanks for the explanation. The disagreement apparently stems from different ideas about the set of possibilities over which one spreads the uniform distribution.
I prefer to reason as follows: there is a set of people who exist at some moment or other in the history of the universe, and the creator assigns “your” consciousness to one of these people with uniform distribution. But this would allow me to update on survival exactly the way I did. However, the smooth transition would break between E and F.
What you describe, as I understand it, is that the assignment is done with a uniform distribution not over people who ever exist, but over people who exist at the moment they are told the rules (so people who are never told the rules don’t count). This seems to me pretty arbitrary and hard to generalise (and also dangerously close to survivorship bias).
In the case of SIA, the uniform distribution is extended to cover the set of hypothetically existing people as well. Do I understand it correctly?
Right, SIA assumes that you are a random observer from the set of all possible observers, and so it follows that worlds with more real people are more likely to contain you.
This is clearly unreasonable, because “you” could not have found yourself to be one of the non-real people. “You” is just a name for whoever finds himself to be real. This is why you should consider yourself a random selection from the real people.
In the particular case under consideration, you should consider yourself a random selection from the people who are told the rules. This is because only those people can estimate the probability; in as much as you estimate the probability, you could not possibly have found yourself to be one of those who are not told the rules.
So, what if the setting is the same as in B or C, except that “you” know that only “you” are told the rules?
That’s a complicated question, because in this case your estimate will depend on your beliefs about why you were selected as the one to know the rules. If you are 100% certain that you were randomly selected out of all the persons, and that it could have been a person who was killed who was told the rules (before he was killed), then your probability of being behind a blue door will be 99%.
If you are 100% certain that you were deliberately chosen as a survivor, and that if someone else had survived and you had not, the other person would have been told the rules and not you, then your probability will be 50%.
To the degree that you are uncertain about how the choice was made, your probability will be somewhere between these two values.
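For instance (my own illustration): writing p for your credence that the selection was random, the combined estimate is P(blue) = p × 0.99 + (1 − p) × 0.5, which runs from 50% at p = 0 up to 99% at p = 1.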
You could have been one of those who didn’t learn the rules, you just wouldn’t have found out about it. Why doesn’t the fact that this didn’t happen tell you anything?
What is your feeling in the case where the victims are first told they will be killed, then the situation is explained to them and finally they are killed?
Similarly, the survivors are first told they will survive, and then the situation is explained to them.
This is basically the same as C’. The probability of being behind a blue door remains at 99%, both for those who are killed, and for those who survive.
There cannot be a continuous series between the two extremes, since in order to get from one to the other, you have to make some people go from existing in the first case, to not existing in the last case. This implies that they go from knowing something in the first case, to not knowing anything in the last case. If the other people (who always exist) know this fact, then this can affect their subjective probability. If they don’t know, then we’re talking about an entirely different situation.
PS: Thanks for your assiduous attempts to explain your position, it’s very useful.
A rather curious claim, I have to say.
There is a group of people, and you are clearly not in their group—in fact the first thing you know, and the first thing they know, is that you are not in the same group.
Yet your own subjective probability of being blue-doored depends on what they were told just before being killed. So if an absent-minded executioner wanders in and says “maybe I told them, maybe I didn’t; I forget”, that “I forget” contains the difference between a 99% and a 50% chance of your being blue-doored.
To push it still further: if there were two experiments, side by side (world C″ and world X″, with world X″ inverting the proportion of red and blue doors), then this type of reasoning would put you in a curious situation. If everyone were first told “you are a survivor/victim of world C″/X″ with 99% blue/red doors”, and then the situation were explained to them, the above reasoning would imply that you had a 50% chance of being blue-doored whatever world you were in!
Unless you can explain why “being in world C″/X″” is a permissible piece of info to put you in a different class, while “you are a survivor/victim” is not, then I can walk the above paradox back down to A (and its inverse, Z), and get 50% odds in situations where they are clearly not justified.
I don’t understand your duplicate world idea well enough to respond to it yet. Do you mean they are told which world they are in, or just that they are told that there are the two worlds, and whether they survive, but not which world they are in?
The basic class idea I am supporting is that in order to count myself as in the same class with someone else, we both have to have access to basically the same probability-affecting information. So I cannot be in the same class with someone who does not exist but might have existed, because he has no access to any information. Similarly, if I am told the situation but he is not, I am not in the same class as him, because I can estimate the probability and he cannot. But the order in which the information is presented should not affect the probability, as long as all of it is presented to everyone. The difference between being a survivor and being a victim (if all are told) clearly does not change your class, because it is not part of the probability-affecting information. As you argued yourself, the probability remains at 99% when you hear this.
Let’s simplify this. Take C, and create a bunch of other observers in another set of rooms. These observers will be killed; it is explained to them that they will be killed, and then the rules of the whole setup, and then they are killed.
Do you feel these extra observers will change anything from the probability perspective?
No. But this is not because these observers are told they will be killed, but because their death does not depend on a coin flip; it is part of the rules. We could suppose that they are in rooms with green doors, and after the situation has been explained to them, they know they are in rooms with green doors. But the other observers, whether they are to be killed or not, know that this depends on the coin flip, and they do not know the color of their door, except that it is not green.
Actually, strike that—we haven’t reached the limit of useful argument!
Consider the following scenario: the number of extra observers (that will get killed anyway) is a trillion. Only the extra observers, and the survivors, will be told the rules of the game.
Under your rules, this would mean that the odds on the coin flip are exactly 50-50.
Then you are told that you are not an extra observer, and won’t be killed. There is a 1/(trillion + 1) chance that you would be told this if the coin had come up heads, and a 99/(trillion + 99) chance if it had come up tails. So your posterior odds are now essentially 99% to 1% again. These trillion extra observers have brought you back close to SIA odds again.
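Those odds can be checked directly (a sketch under the assumptions above: 1 red door, 99 blue doors, and a trillion extra observers who are always killed but always told the rules):

```python
from fractions import Fraction

N = 10**12  # extra observers: killed regardless of the coin, but told the rules

# Likelihood of the evidence "you are told the rules and are not an extra observer":
# heads -> the 1 red-doored person survives, so N + 1 people are told the rules;
# tails -> the 99 blue-doored people survive, so N + 99 people are told the rules.
l_heads = Fraction(1, N + 1)
l_tails = Fraction(99, N + 99)

# Posterior probability of tails (a blue door) from a 50-50 prior:
print(float(l_tails / (l_heads + l_tails)))  # ~0.99
```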
When I said that the extra observers don’t change anything, I meant under the assumption that everyone is told the rules at some point, whether he survives or not. If you assume that some people are not told the rules, I agree that extra observers who are told the rules change the probability, basically for the reason that you are giving.
What I have maintained consistently here is that if you are told the rules, you should consider yourself a random selection from those who are told the rules, and not from anyone else, and you should calculate the probability on this basis. This gives consistent results, and does not have the consequence you gave in the earlier comment (which assumed that I meant to say that extra observers could not change anything whether or not people to be killed were told the rules.)
I get that—I’m just pointing out that your position is not “indifferent to irrelevant information”. In other words, if there are a hundred/million/trillion other observers created who are ultimately not involved in the whole coloured room dilemma, their existence changes your odds of being red- or blue-doored, even after you have been told you are not one of them.
(SIA is indifferent to irrelevant extra observers).
Yes, SIA is indifferent to extra observers, precisely because it assumes I was really lucky to exist and might have found myself not to exist, i.e. it assumes I am a random selection from all possible observers, not just real ones.
Unfortunately for SIA, no one can ever find himself not to exist.
I think we’ve reached the limit of productive argument; the SIA, and the negation of the SIA, are both logically coherent (they are essentially just different priors on your subjective experience of being alive). So I won’t be able to convince you, if I haven’t so far. And I haven’t been convinced.
But do consider the oddity of your position—you claim that if you were told you would survive, told the rules of the set-up, and then the executioner said to you “you know those people who were killed—who never shared the current subjective experience that you have now, and who are dead—well, before they died, I told them/didn’t tell them...” then your probability estimate of your current state would change depending on what he told these dead people.
But you similarly claim that if the executioner said the same thing about the extra observers, then your probability estimate would not change, whatever he said to them.
The answer in C’ depends on your reference class. If your reference class is everyone, then it remains 99%. If your reference class is survivors, then it becomes 50%.
Which shows how odd and arbitrary reference classes are.
I don’t think it is arbitrary. I responded to that argument in the comment chain here and still agree with that. (I am the same person as user Unknowns but changed my username some time ago.)