Assume that Jar S contains just silver balls, whereas Jar R contains ninety percent silver balls and ten percent red balls.
Someone secretly and randomly picks a jar, with an equal chance of choosing either. This picker then takes N randomly selected balls from his chosen jar with replacement. If a ball is silver he keeps silent, whereas if a ball is red he says “red.”
You hear nothing. You make the straightforward calculation using Bayes’ rule to determine the new probability that the picker was drawing from Jar S.
But then you learn something. The red balls are bombs and if one had been picked it would have instantly exploded and killed you. Should learning that red balls are bombs influence your estimate of the probability that the picker was drawing from Jar S?
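For reference, a minimal sketch of that straightforward calculation for the no-bomb version, assuming the picker reports honestly and you know N (the setup leaves N unspecified, so the values below are just examples):

```python
# Posterior probability of Jar S after hearing nothing (no-bomb version).
# Jar S is all silver, so P(hear nothing | S) = 1.
# Jar R is 90% silver, so over N draws with replacement P(hear nothing | R) = 0.9**N.
def p_jar_s_given_silence(n_draws, prior_s=0.5):
    p_silence_given_s = 1.0
    p_silence_given_r = 0.9 ** n_draws
    return (p_silence_given_s * prior_s) / (
        p_silence_given_s * prior_s + p_silence_given_r * (1 - prior_s)
    )

print(p_jar_s_given_silence(1))   # ~0.526
print(p_jar_s_given_silence(10))  # ~0.741
```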
I'm currently writing a paper on how the Fermi paradox should cause us to update our beliefs about optimal existential risk strategies. This hypothetical is trying to get at whether it matters if we assume that aliens would spread at the speed of light, killing everything in their path.
I had a conversation with another person about this Leslie's-firing-squad type of question. Basically, I came up with a caveman analogy, with the cavemen facing lethal threats. It's pretty clear—from the outside—that the cavemen who do probability correctly and don't do anthropic reasoning about tigers in the field will do better at mapping the lethal dangers in their environment.
Thanks for letting me know about “Leslie’s firing squad[s]”
You're welcome. So what's your actual take on the issue? I've never seen a coherent explanation of why the bombs must make a difference. I've seen appeals to "but you wouldn't be thinking anything if it was red," which ought to cancel out perfectly if you apply the same reasoning to the urn choice as well.
Edit: i.e., this anthropics business, to me, is sort of like calculating the forces in a mechanical system, making an error somewhere, and ending up with an apparent perpetual motion machine because the forces on your wheel with water and magnets fail to cancel out. Likewise, you evaluate the impact of some irrelevant information, make an error somewhere, and the irrelevant information appears to make a difference.
To a first approximation I don't think it makes a difference, but it does add some logical uncertainty. Also, intuitively I want to be able to use anthropic reasoning to say "there is only a tiny chance that the universe would have condition X, but I'm not surprised by X because without X observers such as us wouldn't exist," but I think doing this implies I have to give a different estimate if red = bomb.
Hmm, that’s an interesting angle on the issue, I didn’t quite realize that was the motivation here.
I would be surprised by our existence if that were the case, and not further surprised by the observation of X (because I have already observed X by way of perceiving my own existence).
Let's say I remember that there was a strange, surprising sign painted on a wall, and I go by the wall and see that sign. I am surprised that the sign is on the wall at all, but I am not surprised that I am seeing it (because I can perform an operation in my head that implies the existence of the sign—my memory tells me I saw it before). Same with existence: I am surprised that we exist at all, but I am not surprised when I observe something necessary for my existence, because I could have derived it from prior observations.
I don't think this particular example really demonstrates what you're trying to show here.
A simpler example would be:
You draw one ball out of a jar containing 99% red balls and 1% silver balls (randomly mixed).
The ball is silver. Is this surprising? Yes.
What if you instead draw a ball in a dark room, so you can't see the color of the ball (same probability distribution)? After drawing the ball, you are informed that the red balls contain a high explosive, and that if you had drawn a red ball from the jar it would have instantly exploded, killing you.
The lights go on. You see that you’re holding a silver ball. Does this surprise you?
Well, being alive would surprise me, but not the colour of the ball. Essentially what happens is that the internal senses (e.g. perceiving one's own internal monologue) end up sensing the ball's colour (by way of the high explosive).
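A quick numerical sketch of that answer, under the idealized assumption that a red draw would always have killed you before the lights came on (the numbers are mine, not part of the comment above):

```python
# Simpler example: one draw from a jar that is 99% red, 1% silver.
p_silver = 0.01

# No-bomb version: seeing a silver ball is a 1-in-100 event, hence surprising.
p_see_silver = p_silver

# Bomb version: you are only around to see the ball if it was silver, so
# being alive is the surprising part; the colour, given that you are alive, is not.
p_alive = p_silver
p_silver_given_alive = p_silver / p_alive  # = 1.0

print(p_see_silver, p_alive, p_silver_given_alive)  # 0.01 0.01 1.0
```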
This is related to the Sleeping Beauty Problem, and in general the answer depends what you’re trying to do with “probability”. For lots and lots more, Bostrom’s PhD thesis is very detailed: Anthropic Bias: Observation Selection Effects in Science and Philosophy.
Bostrom's Observation Selection Effects and Human Extinction Risks paper is less philosophical and sounds like it's more relevant to the paper you're working on.
Before I actually do the math, “you hear nothing” appears to affect my estimate exactly in the same way as “you’re still alive.”
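Spelling that out under the idealized assumptions of an honest picker and a perfectly lethal bomb (my framing of the comparison, not the commenter's): the two pieces of evidence have identical likelihoods, so they produce identical posteriors.

```python
# "You hear nothing" (no-bomb, honest picker) and "you're still alive"
# (bomb version, perfectly lethal bomb) both have probability 1 under
# Jar S and probability 0.9**N under Jar R, so they update P(Jar S)
# in exactly the same way.
def posterior_s(likelihood_given_r, prior_s=0.5):
    return prior_s / (prior_s + likelihood_given_r * (1 - prior_s))

N = 10
p_silence_given_r = 0.9 ** N  # no-bomb version
p_alive_given_r = 0.9 ** N    # bomb version
print(posterior_s(p_silence_given_r) == posterior_s(p_alive_given_r))  # True
```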
This seems like the obvious answer to me as well. What am I missing?
Now that I see this problem again, my thoughts on it are slightly different.
In the version with no bombs, there’s a possible scenario where the picker draws a red ball but lies to you by keeping silent. So, there’s a viable way for “you hear nothing” AND “Jar R” to happen.
But in the version with bombs, the scenario with "you are alive" AND "a red ball was drawn" can never happen. So, being alive in the with-bomb version is stronger evidence for Jar S than hearing nothing in the no-bomb version.
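One way to quantify that asymmetry is to give the picker a small probability of lying in the no-bomb version; the lie probability eps below is a hypothetical parameter I'm introducing, not something in the original setup:

```python
# No-bomb version with a possibly dishonest picker: on a red draw he stays
# silent with probability eps, so silence becomes weaker evidence for Jar S.
def p_s_given_silence(n_draws, eps, prior_s=0.5):
    p_silence_given_r = (0.9 + 0.1 * eps) ** n_draws
    return prior_s / (prior_s + p_silence_given_r * (1 - prior_s))

N = 10
print(p_s_given_silence(N, eps=0.0))  # ~0.741, same as the perfectly lethal bomb version
print(p_s_given_silence(N, eps=0.5))  # ~0.626, silence is now weaker evidence
```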
Okay, sure. The picker could be lying or speaking quietly; the bomb could have malfunctioned or could be on a timer that hasn't gone off yet. (Note to self: put down the ball as soon as you find out that it could be a bomb.) These things don't seem like they should be the point of a thought experiment.
A side note: under the cherry bomb scenario, the probability of you hearing the word "red" is zero.
If the two jar scenarios start with equal anthropic measure (i.e. looking in from the outside), then you really are less likely to have Jar R if you're not dead.