Counterpoint:
I’m at a local convenience store. A thief routinely robs me. He points a gun at me, threatens me, but never shoots, even when I push back a little. At this point, it’s kind of like we both know what’s happening, even though, technically, there’s a chance of physical danger.
Had this guy shot me, I wouldn’t be alive to reason about his next visit.
Now consider a different thief comes in, also armed. What is my probability of getting shot, as compared with the first thief?
Much, much higher with the second thief. My past experiences with the first thief act as evidence towards the update that I’m less likely to be shot. With this new thief, I don’t have that evidence, so my probability of being shot is just the base rate, based on my read of the situation.
I believe updating on the non-fatal encounters with the first thief is correct, and it seems to me analogous to updating on the sun not having exploded. Thoughts?
Because a person has a significant chance of surviving a bullet wound—or more relevantly, of surviving an assault with a gun—your not having been assaulted by the first thief is evidence that you will not be assaulted in future encounters with him, but it is weaker evidence than it would be if you could be certain of your ability to survive (and your ability to retain your rationality skills and memories after) every encounter with him.
Humans are very good at reading the “motivational states” of the other people in the room with them. If, for example, the thief’s eyes are glassy and he looks like he is staring at something far away even though you know it is unlikely that there is anything of interest in his visual field far away, well, that is a sign he is in a dissociated state, which makes it more likely he’ll do something unpredictable and maybe violent. If when he looks at you he seems to look right through you, that is a sign of a coldness that also makes it more likely he will be violent if he can thereby benefit himself personally by doing so. So, what is actually doing most of the work of lowering your probability about the danger posed to you by the first thief? The mere fact that you escaped all the previous encounters without having been assaulted, or your observations of his body language, tone of voice, and other details that give clues about his personality and his mental state?
Replace the thief with a black box that either explodes and kills you, or doesn’t. It has some chance of killing you, but you don’t know what that chance is.
I was put in a room with black-box-one 5 times. Each time it didn’t explode.
Now, I have a choice: I can go back in the room with black-box-one, or I can go to a room with black-box-two.
I’ll take black-box-one, based on prior evidence.
If I know nothing about the boxes except that they have the same a priori probability of exploding and killing me, then I am indifferent between the two black boxes.
It is not terribly difficult to craft counter-intuitive examples of the principle. I anticipated I would be presented with such examples (because this is not my first time discussing this topic), which is why in my original comment I wrote, “its counter-intuitiveness is not by itself a strong reason to disbelieve it,” and the rest of that paragraph.
Okay, but I just don’t agree.
Let each black box have some probability of killing you, uniformly chosen from a set of possible probabilities. Let’s start with a simple case: that probability is either 0 or 1.
The a priori chance of it killing you is .5.
After the box doesn’t kill you, you update, and now the chance is 0.
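For concreteness, here is a minimal sketch of that update in Python, assuming the 50/50 prior over the two box types described above:

```python
# Bayes update for a box whose kill probability is either 0 or 1, each with prior 0.5.
prior_kill_1 = 0.5                 # P(the box is the "always kills" box)
p_survive_given_1 = 0.0            # an always-kills box never lets you survive
p_survive_given_0 = 1.0            # a never-kills box always lets you survive

p_survive = p_survive_given_1 * prior_kill_1 + p_survive_given_0 * (1 - prior_kill_1)
posterior_kill_1 = p_survive_given_1 * prior_kill_1 / p_survive
print(p_survive)         # 0.5 -- surviving and dying are a priori equally likely
print(posterior_kill_1)  # 0.0 -- after surviving once, the box must be the p = 0 box
```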
What if we instead use a uniform distribution on [0, 1)? Some boxes have a .3 chance of killing you, others .78.
Far more of the experiences of not dying are from the low p-kill boxes than from the high p-kill ones. When people select the same box, instead of a new one, after not being killed, that brings the average kill rate of selected boxes down. Run this experiment for long enough, and the only boxes still being selected are the extremely low p-kill boxes that haven’t killed all their subjects yet.
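A small Monte Carlo sketch of that selection effect (my own illustration of the setup above, with assumed subject and round counts; each subject keeps re-entering the room with the same box for as long as they survive):

```python
import random

# Monte Carlo sketch of the uniform-[0,1) case: each subject is assigned a box whose
# kill probability is drawn uniformly from [0, 1), and re-enters the room with that
# same box every round for as long as they survive.
random.seed(0)
N_SUBJECTS = 100_000
N_ROUNDS = 5

alive = [random.random() for _ in range(N_SUBJECTS)]  # kill probabilities of living subjects' boxes

for round_number in range(1, N_ROUNDS + 1):
    alive = [p for p in alive if random.random() >= p]  # survive this round with probability 1 - p
    avg_p = sum(alive) / len(alive)
    print(f"round {round_number}: survivors={len(alive)}, "
          f"avg kill probability of their boxes={avg_p:.3f}")

# The average trends from ~0.33 after one round toward smaller and smaller values:
# the surviving population is increasingly dominated by the low p-kill boxes.
```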
This time, could you make a stronger objection, one that more directly addresses my counter-example?
In your new scenario, if I understand correctly, you have postulated that one box always explodes and one never explodes; I must undergo 2 experiences: the first experience is with one of the boxes, picked at random; then I get to choose whether my second experience is with the same box or whether it is with the other box. But I don’t need to know the outcome of the first experience to know that I want to limit my exposure to just one of these dangerous boxes: I will always choose to undergo the second experience with the same box as I underwent the first one with. Note that I arrived at this choice without doing the thing that I have been warning people not to do, namely, to update on observation X when I know it would have been impossible for me to survive (or more precisely for my rationality, my ability to have and to refine a model of reality, to survive) the observation not X.
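For concreteness, here is a small enumeration of that first scenario under my reading of it (one box always explodes, one never does, and the first experience uses one of them chosen at random):

```python
# Enumeration of the first scenario as I understand it: one box explodes every time,
# the other never explodes, and the first experience uses one of them chosen at random.
# Compare the two possible policies for the second experience.
def survives_both(first_box_explodes, policy):
    """True if the subject survives both experiences.
    policy: 'same' -> reuse the first box; 'switch' -> use the other box."""
    if first_box_explodes:
        return False  # killed during the first experience
    # The first box is the never-explodes box, so the other box is the always-explodes box.
    second_box_explodes = (policy == "switch")
    return not second_box_explodes

for policy in ("same", "switch"):
    # Average over the two equally likely assignments of which box is used first.
    outcomes = [survives_both(explodes, policy) for explodes in (True, False)]
    print(policy, "-> probability of surviving both experiences:", sum(outcomes) / len(outcomes))

# same   -> 0.5 (you die only if the randomly chosen first box is the exploding one)
# switch -> 0.0 (whichever box was safe the first time, switching guarantees meeting the other one)
```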
That takes care of the first of your two new scenarios. In your second new scenario, I have a .5 chance of dying during my first experience. Then I may choose whether my second experience is with the same box or a new one. Before I make my choice, I would dearly love to experiment with either box in a setting in which I could survive the box’s exploding. But by your postulate as I understand it, that is not possible, so I am indifferent about which box I have my second experience with: either way I choose, my probability that I will die during the second experience is .5.
Note that in your previous comment, in which there was some P such that each time a box is used it has a probability P of exploding, there is no benefit to my being able to experiment with a box in a setting in which I could survive an explosion, but in the scenario we are considering now there is a huge benefit.
Suppose my best friend is observing the scenario from a safe distance: he can see what is happening, but is protected from any exploding box. My surviving the first experience changes his probability that the box used in the first experience will explode the next time it is used from .5 to .333. Actually, I am not sure of that number (because I am not sure the law of succession applies here—it has been a long time since I read my E.T. Jaynes) but I am sure that his probability changes from .5 to something less than .5. And my best friend can communicate that fact to me: “Richard,” he can say, “stick with the same box used in your first experience.” But his message has the same defect that my directly observing the behavior of the box has: namely, since I cannot survive the outcome that would have led him to increase his probability that the box will explode the next time it is used, I cannot update on the fact that his probability has decreased.
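For what it is worth, here is a quick check of that .333, under the assumption that my friend’s prior over the box’s per-use explosion probability is uniform on [0, 1], which is the case where the law of succession applies:

```python
from fractions import Fraction

# My friend's update, assuming a uniform prior on the box's per-use explosion probability p:
#   P(explodes next time | did not explode this time) = ∫ p(1-p) dp / ∫ (1-p) dp over [0, 1]
numerator = Fraction(1, 2) - Fraction(1, 3)   # ∫ p(1-p) dp = 1/2 - 1/3 = 1/6
denominator = Fraction(1, 2)                  # ∫ (1-p) dp  = 1/2
print(numerator / denominator)                # 1/3, i.e. the .333 from the law of succession
```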
Students of E.T. Jaynes know that observer A’s probability of hypothesis H can differ from observer B’s probability: this happens when A has seen evidence for or against H that B has not seen yet. Well, here we have a case where A’s probability can differ from B’s even though A and B have seen the same sequence of evidence about H: namely, that happens when one of the observers could not have survived having observed a sequence of events (different from the sequence that actually happened) that the other observer could have survived.
TropicalFruit and I have taken this discussion private (in order to avoid flooding this comment section with discussion on a point only very distantly related to the OP). However, if you have any interest in the discussion, ask one of us for a copy. (We have both agreed to provide a copy to whoever asks.)
I would like a copy of the discussion.