The estimate should take into account the expectation of being asked further questions.
I do not know how related this is to your comment, but it made me think of another response to the Dutch book objection. (Am I using that term correctly?)
If Omega asks me about a red bead I can say 100%. If he then asks about a blue bead I can adjust my original estimate so that red and blue are equal at 50/50. Every question asked adds more information. If Omega asks about green beads, all three answers shift to 1/3.
This translates into an example with numbered balls just fine. Each additional color or number Omega asks about decreases the expected probability that any particular one of them will come out of the jar, simply because the known space of colors and numbers is growing. Until Omega acknowledges that there could be a bead of a given color or number, there is no particular reason to assume that such a bead exists.
If the example were rewritten to say that any type of object could be in the jar, this still makes sense. If Omega asks about a red bead, we say 100%. If Omega asks about a blue chair, both become 50%. The restriction to colors and numbers is our assumed knowledge and has nothing to do with the problem at hand. We can meta-game all we want, but it has nothing to do with what could be in the jar.
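A minimal sketch of the updating rule this comment describes, assuming nothing beyond the comment itself (the option names are just the examples above; this is the naive proposal, not a recommendation):

```python
from fractions import Fraction

mentioned = []  # options Omega has acknowledged so far

def estimate(option):
    """Naive rule: spread probability uniformly over every option
    Omega has mentioned so far."""
    if option not in mentioned:
        mentioned.append(option)
    return {o: Fraction(1, len(mentioned)) for o in mentioned}

print(estimate("red"))    # red -> 1, i.e. 100%
print(estimate("blue"))   # red and blue -> 1/2 each
print(estimate("green"))  # all three -> 1/3 each
```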
The state of the initial problem is this:
A red bead could be in the jar
After the second question:
A red bead could be in the jar
A green bead could be in the jar
I suppose it makes some sense to include an “other” category, but there is no knowledge of anything other than red and green beads. The question of probability implies that another may exist, but is that enough to assign it a probability?
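For contrast, a hedged sketch of that “other” category, reserving some probability mass for outcomes not yet acknowledged. The reserved fraction is an arbitrary assumption for illustration; nothing in the problem pins it down:

```python
from fractions import Fraction

OTHER_MASS = Fraction(1, 2)  # arbitrary: how much mass to reserve for "other"

def estimate_with_other(mentioned):
    """Split 1 - OTHER_MASS evenly among the acknowledged colors and
    keep OTHER_MASS aside for everything not yet acknowledged."""
    shared = (1 - OTHER_MASS) / len(mentioned)
    dist = {color: shared for color in mentioned}
    dist["other"] = OTHER_MASS
    return dist

print(estimate_with_other(["red"]))           # red 1/2, other 1/2
print(estimate_with_other(["red", "green"]))  # red 1/4, green 1/4, other 1/2
```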
Every question asked adds more information. If Omega asks about green beads, all three answers shift to 1/3.
I don’t think we should treat Omega as adding (much) new information with each question. Omega is superintelligent; we should assume that he has already gone all the way down the rabbit hole of possible colors, including ones that our brains could process but our eyes don’t see. We shouldn’t infer anything about his state of mind from the fact that he’s only asking questions about red, green, and blue. A sequence of lilac, turquoise, turquoise, lilac, lilac says far more about what’s in the jar than the two hundred color questions Omega asked you beforehand.
Not every question Omega could ask would provide new information, but some certainly do. Suppose his follow-up questions were “What is the probability that the bead is transparent?”, “What is the probability that the bead is made of wood?” and “What is the probability that the bead is striped?”. It is very likely that your original probability distribution over colors implicitly set at least one of these answers to zero, but the fact that Omega has mentioned it as a possibility makes it considerably more likely.
If Omega asking whether the bead could be striped changes your probability estimates, then you were either wrong before or wrong after (or likely both).
If Omega tells you at the outset that the beads are all solid colors, then you should maintain your zero estimate that any are striped. If not, then you never should have had a zero estimate. He’s not giving you new information; he’s highlighting information you already had (or didn’t have).
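To make the two cases concrete, here is a minimal Bayes’-rule sketch; the likelihood numbers are invented for illustration:

```python
def posterior(prior, likelihood, evidence_prob):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# If Omega asking about stripes is more likely when striped beads exist,
# a small nonzero prior gets pushed up by the question...
print(posterior(prior=0.01, likelihood=0.9, evidence_prob=0.1))  # ~0.09

# ...but a prior of exactly zero can never be raised by any evidence.
print(posterior(prior=0.0, likelihood=0.9, evidence_prob=0.1))   # 0.0
```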
I don’t see any way to establish a reliable (non-anthropomorphic) chain of causality that connects there being red beads in the jar with Omega asking about red beads. He can ask about beads that aren’t there, and about beads that couldn’t be there given the information he’s given you. When Omega offered to save x+1 billion people if the Earth was less than 1 million years old, I don’t think anyone argued that his suggesting it should change our estimates.
There’s no need to, because probability is in the mind.
If you’re going to update based on what Omega asks you, then you must believe there is a connection that you have some information about.
If we don’t know anything about Omega’s thought process or goals, then his questions tell us nothing.
I think our only disagreement is semantic.
I don’t see any way to establish a reliable (non-anthropomorphic) chain of causality that connects there being red beads in the jar with Omega asking about red beads.
If I initially divide the state space into solid colours, and then Omega asks if the bead could be striped, then I would say that’s a form of new information; specifically, information that my initial assumption about the nature of the state space was wrong. (It’s not information I can update on; I have to retrospectively change my priors.)
Apologies for the pointless diversion.
An ideal model of the real world must allow any miracle to happen; nothing should be logically prohibited.
Of note, I was operating under a bad assumption with regard to the original example. I assumed that the set of colors was either finite but unknown, or infinite. In the former case, every question gives a little information about the possible set. In the latter, it really does not matter much.
A sequence of lilac, turquoise, turquoise, lilac, lilac says far more about what’s in the jar than the two hundred color questions Omega asked you beforehand.
Yes, this is true. Personally, I am still curious about what to do with the two hundred color questions.
Don’t think of probability as being mutable, as getting updated. Instead, consider a fixed, comprehensive state space that has a place in it for every possible future history, including the possible questions asked, the possible pieces of evidence presented, and the possible actions you take. Assign a fixed probability measure to this state space.
Now, when you do observe something, that observation is information: an event, a subset of the global state space. This event selects an area of the space and encompasses some of the probability mass. The statements, or beliefs (such as “the ball #2 will be red”), that you update on this info are random variables. A random variable is a function that maps the state space onto a simpler domain; for example, a binary discrete variable is basically an event, a subset of the state space (that is, in some states the ball #2 is indeed defined to be red, and those states belong to the event of ball #2 being red).
Your information about the world retains only part of the state space; within that part, some portion of the probability mass falls inside the event defining your statement, and some portion remains outside it. The “updating” only happens because you focus on this part, as opposed to the whole state space.
If that picture is clear, you can step back and consider what kind of probability measure you’d assign to your state space, given that its structure already encodes all possible future observations. If you are indifferent between models, the assignment is going to be some kind of division into equal parts, according to the structure of the state space.
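A minimal sketch of this picture in Python, using an invented four-state space and a uniform measure (both are assumptions for illustration):

```python
from fractions import Fraction

# Toy state space: one state per possible bead color.
states = ["red", "green", "blue", "striped"]

# A fixed measure, assigned once up front (uniform, by indifference).
measure = {s: Fraction(1, len(states)) for s in states}

def probability(event):
    """Total mass of an event (a subset of the state space)."""
    return sum(measure[s] for s in event)

def conditional(event, info):
    """P(event | info): restrict attention to the info region and
    compare the mass inside the event with the total mass of info."""
    return probability(event & info) / probability(info)

# "Updating" on the information that the bead is a solid color:
info = {"red", "green", "blue"}
print(conditional({"red"}, info))  # 1/3
```

The measure itself never changes; only the region of the state space we focus on does.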
IAWYC, but as pedagogy it’s about on the level of “How should you imagine a 7-dimensional torus? Just imagine an n-dimensional torus and let n go to 7.”
Eliezer’s post on priors explains the same idea more accessibly.
EDIT: Sorry, I didn’t notice you already linked it below.
What if Omega wants you to commit to a bet based on your probabilities at every step?
Or what if he just straight up asks you what color you want to guess the bead will be, without asking about any individual colors? (Then you’d probably be best served by switching to a language with fewer basic color words, but that aside...)
Then you are forced to bid 0 because you have to account for any further questions, which sounds similar to what Vladimir_Nesov said.
By the way, I think adding another restriction to your example to force it back into your specific response is not particularly meaningful. In the case where you do not have to commit to a bet at every step, does what I say make sense? If so, then what Vladimir_Nesov suggested seems to be on the right path with regard to your restrictions.
Switching languages is a semantic trick. If we are allowed to use any words to describe the bead, we can just say “not-clear”, because the space of “not-clear” covers what we generally mean by “color”. We may as well say “the bead will be a colored bead.” All of this breaks the assumed principle of no information.
If Omega wanted a particular color and forced us into actually answering the annoying question, we would be completely off the path of probabilities, and it would not matter what we answered as long as we picked a color. If Omega then asked us what the probability of that particular color coming out of the jar was, the answer should be the same as for any other color. It drops toward zero unless you self-restrict to the number of colors you can personally remember.
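That limiting behavior is easy to see numerically; for a hypothetical count N of distinguishable colors, a uniform guess assigns 1/N to each:

```python
for n in (10, 100, 10_000, 10**9):
    print(n, 1 / n)  # the per-color estimate shrinks toward zero as N grows
```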
MrHen, whatever strategy you’re employing here, it doesn’t sound like a strategy for arriving at the really truly correct answer, but some sort of clever set of verbal responses with a different purpose entirely. In real life, just because Omega asked if the bead is red simply does not mean there is probability 0 of it being green.
Mmm… I was not trying to employ a strategy with clever verbal responses. I thought I was arguing against that, actually, so I must be far from where I think I am.
I feel like I am trying to answer a completely different question than the one originally asked. Is the question:
Knowing nothing about what is in the jar except that its contents are divided by color as per our definition of “color”, what is the probability of a red bead being pulled?
Knowing nothing about what is in the jar, what is the probability of a red bead being pulled?
I admittedly assumed the latter even though the article used words closer to the former. Perhaps this was my mistake?
In real life, just because Omega asked if the bead is red simply does not mean there is probability 0 of it being green.
I would agree. I do think that Omega asking about a red bead implies nothing about the probability of it being green. What I am currently wondering is whether the question implies anything about the probability of the bead being red. If Omega acknowledges that the bead could be red, does that give red a higher probability than green?
I suppose I instinctively would answer affirmatively. The reasoning is that “red” is now included in the jar’s potential outcomes while green has not been acknowledged yet. In other words, green doesn’t even have a probability. Strictly speaking, this makes little sense, so I must be misstepping somewhere. My hunch is that the misstep is my disallowing green from the potential outcomes.
This does not mean that I refuse to think of green as a color, but that green is not automatically included in the jar’s potential outcomes just because Omega used the word “color”. Is this the verbal cleverness you were referring to?
(Switching thoughts) In terms of arriving at the really truly correct answer, it seems that what is desired is a strategy that gets closer as more beads are revealed. If no beads are revealed, what sort of strategy is possible? I think the answer to this revolves around my potential confusion about the original question.
I apologize if I am muddying things up and am way off base.
Is Omega privileging the hypothesis that the bead is red? :-)