If I know nothing about the boxes except that they have the same a priori probability of exploding and killing me, then I am indifferent between the two black boxes.
It is not terribly difficult to craft counter-intuitive examples of the principle. I anticipated I would be presented with such examples (because this is not my first time discussing this topic), which is why in my original comment I wrote, “its counter-intuitiveness is not by itself a strong reason to disbelieve it,” and the rest of that paragraph.
Okay, but I just don’t agree.
Let each black box have some probability of killing you, drawn uniformly from a set of possible probabilities. Let’s start with a simple case: that probability is either 0 or 1.
The a priori chance of it killing you is .5.
After the box doesn’t kill you, you update, and now the chance is 0.
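The update in the 0-or-1 case can be sketched as a one-line Bayes computation (a minimal illustration of the argument above; the variable names are mine):

```python
# Prior: the box's kill probability is 0 or 1, each with probability .5.
# Observation: the box did not kill you.
prior = {0.0: 0.5, 1.0: 0.5}
likelihood_survive = {p: 1 - p for p in prior}  # P(survive | kill-prob = p)
evidence = sum(prior[p] * likelihood_survive[p] for p in prior)
posterior = {p: prior[p] * likelihood_survive[p] / evidence for p in prior}
print(posterior)  # {0.0: 1.0, 1.0: 0.0} -- the survivor's box is certainly safe
```

The survival observation has zero likelihood under the "always kills" hypothesis, so the posterior collapses entirely onto the safe box.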
What if we use a uniform distribution on [0, 1)? Some boxes have a .3 chance of killing you, others .78.
Far more of the experiences of not dying are from the low p-kill boxes than from the high p-kill ones. When people select the same box, instead of a new one, after not being killed, that brings the average kill rate of selected boxes down. Run this experiment for long enough, and the only boxes still being selected are the extremely low p-kill boxes that haven’t killed all their subjects yet.
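This selection effect can be checked with a small Monte Carlo sketch (the box count, round count, and seed are my own illustrative choices, not from the discussion):

```python
import random

random.seed(0)
N_BOXES = 10_000
ROUNDS = 20

# Each box has a kill probability drawn uniformly from [0, 1).
boxes = [random.random() for _ in range(N_BOXES)]
alive = list(range(N_BOXES))  # indices of boxes whose subjects still survive

# Each round, every surviving subject re-uses their own box.
for _ in range(ROUNDS):
    alive = [i for i in alive if random.random() >= boxes[i]]

surviving_mean = sum(boxes[i] for i in alive) / len(alive)
overall_mean = sum(boxes) / len(boxes)
print(f"mean p-kill of all boxes:       {overall_mean:.3f}")
print(f"mean p-kill of surviving boxes: {surviving_mean:.3f}")
# The surviving-box average falls far below the overall average:
# repeated survival filters out the high p-kill boxes.
```

After 20 rounds the boxes still being selected average well under .1 kill probability, while the population as a whole averages about .5, which is exactly the "only the extremely low p-kill boxes are left" dynamic described above.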
This time, could you make a stronger objection, one more directly addressed to my counter-example?
In your new scenario, if I understand correctly, you have postulated that one box always explodes and one never explodes; I must undergo two experiences: the first experience is with one of the boxes, picked at random; then I get to choose whether my second experience is with the same box or whether it is with the other box. But I don’t need to know the outcome of the first experience to know that I want to limit my exposure to just one of these dangerous boxes: I will always choose to undergo the second experience with the same box as I underwent the first one with. Note that I arrived at this choice without doing the thing that I have been warning people not to do, namely, to update on observation X when I know it would have been impossible for me to survive (or more precisely for my rationality, my ability to have and to refine a model of reality, to survive) the observation not-X.
That takes care of the first of your two new scenarios. In your second new scenario, I have a .5 chance of dying during my first experience. Then I may choose whether my second experience is with the same box or a new one. Before I make my choice, I would dearly love to experiment with either box in a setting in which I could survive the box’s exploding. But by your postulate as I understand it, that is not possible, so I am indifferent about which box I have my second experience with: either way I choose, my probability that I will die during the second experience is .5.
Note that in your previous comment, in which there was some P such that each time a box is used it has probability P of exploding, there is no benefit to my being able to experiment with a box in a setting in which I could survive an explosion; but in the scenario we are considering now there is a huge benefit.
Suppose my best friend is observing the scenario from a safe distance: he can see what is happening, but is protected from any exploding box. My surviving the first experience changes his probability that the box used in the first experience will explode the next time it is used from .5 to .333. Actually, I am not sure of that number (because I am not sure the law of succession applies here—it has been a long time since I read my E.T. Jaynes) but I am sure that his probability changes from .5 to something less than .5. And my best friend can communicate that fact to me: “Richard,” he can say, “stick with the same box used in your first experience.” But his message has the same defect that my directly observing the behavior of the box has: namely, since I cannot survive the outcome that would have led him to increase his probability that the box will explode the next time it is used, I cannot update on the fact that his probability has decreased.
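The friend’s .5-to-.333 update can in fact be checked numerically: assuming the uniform prior on the kill probability from the second scenario, one observed survival gives a posterior expected kill probability of exactly 1/3, which is what Laplace’s rule of succession predicts. A quick sketch (grid size is my own choice):

```python
# Numerically check E[p | one survival] under a uniform prior on p in [0, 1):
# the posterior density is proportional to the survival likelihood (1 - p).
N = 1_000_000
grid = [(i + 0.5) / N for i in range(N)]  # midpoint grid over [0, 1)
weight_sum = sum(1 - p for p in grid)     # unnormalised posterior mass
posterior_mean = sum(p * (1 - p) for p in grid) / weight_sum
print(round(posterior_mean, 3))  # 0.333 -- rule of succession: (0 + 1) / (1 + 2)
```

So the .333 figure is right for this prior, though the point of the paragraph stands either way: the friend’s probability moves below .5, and the question is whether the survivor may update on that.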
Students of E.T. Jaynes know that observer A’s probability of hypothesis H can differ from observer B’s probability: this happens when A has seen evidence for or against H that B has not seen yet. Well, here we have a case where A’s probability can differ from B’s even though A and B have seen the same sequence of evidence about H: namely, that happens when one of the observers could not have survived having observed a sequence of events (different from the sequence that actually happened) that the other observer could have survived.
TropicalFruit and I have taken this discussion private (in order to avoid flooding this comment section with discussion of a point only very distantly related to the OP). However, if you have any interest in the discussion, ask one of us for a copy. (We have both agreed to provide a copy to whoever asks.)
I would like a copy of the discussion.