This is basically the same as C’. The probability of being behind a blue door remains at 99%, both for those who are killed, and for those who survive.
There cannot be a continuous series between the two extremes, since in order to get from one to the other, you have to make some people go from existing in the first case, to not existing in the last case. This implies that they go from knowing something in the first case, to not knowing anything in the last case. If the other people (who always exist) know this fact, then this can affect their subjective probability. If they don’t know, then we’re talking about an entirely different situation.
PS: Thanks for your assiduous attempts to explain your position; it’s very useful.
A rather curious claim, I have to say.
There is a group of people, and you are clearly not in their group—in fact the first thing you know, and the first thing they know, is that you are not in the same group.
Yet your own subjective probability of being blue-doored depends on what they were told just before being killed. So if an absent-minded executioner wanders in and says “maybe I told them, maybe I didn’t, I forget”, that “I forget” contains the difference between a 99% and a 50% chance of you being blue-doored.
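To spell out where those two numbers come from under the class-based reasoning (a sketch, assuming the usual setup of 99 blue doors and one red door): if the victims are also told the rules, your class is all 100 occupants, so

$$P(\text{blue}) = \frac{99}{100} = 99\%,$$

whereas if only the survivors are told, your class is just the survivors, being in that class tells you nothing about the coin, and

$$P(\text{blue}) = P(\text{the blue rooms are the surviving ones}) = \frac{1}{2}.$$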
To push it still further, if there were to be two experiments side by side (world C″ and world X″, with world X″ inverting the proportion of red and blue doors), then this type of reasoning would put you in a curious situation. If everyone were first told: “you are a survivor/victim of world C″/X″ with 99% blue/red doors”, and then the situation were explained to them, the above reasoning would imply that you had a 50% chance of being blue-doored whatever world you were in!
Unless you can explain why “being in world C″/X″” is a permissible piece of info to put you in a different class, while “you are a survivor/victim” is not, then I can walk the above paradox back down to A (and its inverse, Z), and get 50% odds in situations where they are clearly not justified.
I don’t understand your duplicate-world idea well enough to respond to it yet. Do you mean they are told which world they are in, or just that they are told that there are two worlds, and whether they survive, but not which world they are in?
The basic class idea I am supporting is that in order to count myself as in the same class with someone else, we both have to have access to basically the same probability-affecting information. So I cannot be in the same class with someone who does not exist but might have existed, because he has no access to any information. Similarly, if I am told the situation but he is not, I am not in the same class as him, because I can estimate the probability and he cannot. But the order in which the information is presented should not affect the probability, as long as all of it is presented to everyone. The difference between being a survivor and being a victim (if all are told) clearly does not change your class, because it is not part of the probability-affecting information. As you argued yourself, the probability remains at 99% when you hear this.
Let’s simplify this. Take C, and create a bunch of other observers in another set of rooms. These observers will be killed; it is explained to them that they will be killed, then the rules of the whole setup are explained, and then they are killed.
Do you feel these extra observers will change anything from the probability perspective?
No. But this is not because these observers are told they will be killed, but because their death does not depend on the coin flip: it is part of the rules. We could suppose that they are in rooms with green doors, and after the situation has been explained to them, they know they are in rooms with green doors. But the other observers, whether they are to be killed or not, know that this depends on the coin flip, and they do not know the color of their door, except that it is not green.
Actually, strike that—we haven’t reached the limit of useful argument!
Consider the following scenario: the number of extra observers (who will be killed anyway) is a trillion. Only the extra observers and the survivors will be told the rules of the game.
Under your rules, this would mean that the odds on the coin flip are exactly 50-50.
Then you are told you are not an extra observer, and won’t be killed. There is a 1/(trillion + 1) chance that you would be told this if the coin had come up heads, and a 99/(trillion + 99) chance if the coin had come up tails. So your posterior odds are now essentially 99% to 1% again. These trillion extra observers have brought you back close to SIA odds.
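To check the arithmetic on those figures, write N for the trillion extra observers; the coin is fair, so the posterior odds after hearing that you are not an extra are

$$\frac{P(\text{tails}\mid\text{not an extra})}{P(\text{heads}\mid\text{not an extra})} = \frac{99/(N+99)}{1/(N+1)} = \frac{99(N+1)}{N+99} \approx 99,$$

i.e. roughly 99% for the 99-survivor (blue-door) outcome and 1% for the single-survivor outcome.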
When I said that the extra observers don’t change anything, I meant under the assumption that everyone is told the rules at some point, whether he survives or not. If you assume that some people are not told the rules, I agree that extra observers who are told the rules change the probability, basically for the reason that you are giving.
What I have maintained consistently here is that if you are told the rules, you should consider yourself a random selection from those who are told the rules, and not from anyone else, and you should calculate the probability on this basis. This gives consistent results, and does not have the consequence you gave in the earlier comment (which assumed that I meant to say that extra observers could not change anything whether or not people to be killed were told the rules.)
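To put that rule in symbols (just a sketch, writing H for a hypothesis about how the coin landed and E for what you learn about yourself):

$$P(H \mid E) \;\propto\; P(H) \times \frac{\#\{\text{people told the rules for whom } E \text{ holds, if } H\}}{\#\{\text{people told the rules, if } H\}}.$$

This reproduces the 99% figure when everyone is told the rules, and the calculation above when only the extras and the survivors are told.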
I get that—I’m just pointing out that your position is not “indifferent to irrelevant information”. In other words, if there are a hundred/million/trillion other observers created, who are ultimately not involved in the whole coloured-room dilemma, their existence changes your odds of being red- or blue-doored, even after you have been told you are not one of them.
(SIA is indifferent to irrelevant extra observers).
Yes, SIA is indifferent to extra observers, precisely because it assumes I was really lucky to exist and might have found myself not to exist, i.e. it assumes I am a random selection from all possible observers, not just real ones.
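Concretely, in the door scenario with the figures above, SIA weights each outcome of the coin by the number of observers who could be you; the trillion extras are present whichever way the coin lands and cancel out, leaving

$$\frac{P(\text{tails})}{P(\text{heads})} = \frac{\tfrac{1}{2}\times 99}{\tfrac{1}{2}\times 1} = 99,$$

i.e. the same 99% to 1% odds whether or not the extras exist.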
Unfortunately for SIA, no one can ever find himself not to exist.
I think we’ve reached the limit of productive argument; the SIA, and the negation of the SIA, are both logically coherent (they are essentially just different priors on your subjective experience of being alive). So I won’t be able to convince you, if I haven’t so far. And I haven’t been convinced.
But do consider the oddity of your position—you claim that if you were told you would survive, told the rules of the set-up, and then the executioner said to you “you know those people who were killed—who never shared the current subjective experience that you have now, and who are dead—well, before they died, I told them/didn’t tell them...” then your probability estimate of your current state would change depending on what he told these dead people.
But you similarly claim that if the executioner said the same thing about the extra observers, then your probability estimate would not change, whatever he said to them.