Every question asked is adding more information. If Omega asks about green beads all three answers get shifted to 1⁄3.
I don’t think we should treat Omega as adding (much) new information with each question. Omega is superintelligent; we should assume that he’s already gone all the way down the rabbit hole of possible colors, including ones that our brains could process but our eyes don’t see. We shouldn’t infer anything about his state of mind just because he’s only asking questions about red, green, and blue. A sequence of lilac, turquoise, turquoise, lilac, lilac says far more about what’s in the jar than the two hundred color questions Omega asked you beforehand.
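To make that concrete, here is a rough sketch of the kind of thing I mean (a toy model of my own, with an assumed five-color candidate set and a flat Dirichlet prior, nothing implied by the problem itself): if the questions carry no likelihood, they move nothing, while a handful of actual draws moves the estimate a lot.

```python
# Toy sketch (my own assumptions): treat the jar's color mix as a categorical
# distribution with a symmetric Dirichlet(1) prior over an assumed candidate set.
# Questions that carry no likelihood leave the posterior untouched; draws move it.
from collections import Counter

COLORS = ["red", "green", "blue", "lilac", "turquoise"]  # assumed candidate set

def posterior_means(observed_draws, prior_pseudocount=1.0):
    """Posterior mean frequency of each color under a symmetric Dirichlet prior."""
    counts = Counter(observed_draws)
    total = prior_pseudocount * len(COLORS) + len(observed_draws)
    return {c: (prior_pseudocount + counts[c]) / total for c in COLORS}

# Two hundred questions, modeled here as carrying no likelihood: the estimate
# stays exactly where the prior put it.
print(posterior_means([]))  # every color at 0.2

# Five actual draws move it a lot.
print(posterior_means(["lilac", "turquoise", "turquoise", "lilac", "lilac"]))
# lilac 0.4, turquoise 0.3, red/green/blue 0.1 each
```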
Not every question Omega could ask would provide new information, but some certainly would. Suppose his follow-up questions were “What is the probability that the bead is transparent?”, “What is the probability that the bead is made of wood?” and “What is the probability that the bead is striped?”. It is very likely that your original probability distribution over colors implicitly set at least one of these answers to zero, but the fact that Omega has mentioned it as a possibility makes that possibility considerably more likely.
If Omega asking whether the bead could be striped changes your probability estimates, then you were either wrong before or wrong after (or likely both).
If Omega tells you at the outset that the beads are all solid colors, then you should maintain your zero estimate that any are striped. If not, then you never should have had a zero estimate. He’s not giving you new information; he’s highlighting information you already had (or didn’t have).
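To spell out why the zero estimate is the real problem, here is a minimal sketch with made-up numbers: once a prior is exactly zero, Bayes’ rule can never raise it, no matter what evidence (or question) comes along, whereas even a tiny nonzero prior can move.

```python
# Minimal sketch of the zero-prior point, with made-up numbers: the posterior
# is proportional to the prior, so a prior of exactly zero can never rise.
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """P(hypothesis | evidence) for a single binary hypothesis."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1.0 - prior) * likelihood_if_false
    return numerator / denominator if denominator > 0 else 0.0

print(bayes_update(prior=0.0, likelihood_if_true=0.99, likelihood_if_false=0.01))    # 0.0, forever
print(bayes_update(prior=0.001, likelihood_if_true=0.99, likelihood_if_false=0.01))  # ~0.09
```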
I don’t see any way to establish a reliable (non-anthropomorphic) chain of causality that connects there being red beads in the jars with Omega asking about red beads. He can ask about beads that aren’t there, and that couldn’t be there given the information he’s given you. When Omega offered to save x+1 billion people if the earth was less than 1 million years old, I don’t think anyone argued that his suggesting it should change our estimates.
There’s no need to, because probability is in the mind.
If you’re going to update based on what Omega asks you, then you must believe there is a connection that you have some information about.
If we don’t know anything about Omega’s thought process or goals, then his questions tell us nothing.
I think our only disagreement is semantic.
If I initially divide the state space into solid colours, and then Omega asks if the bead could be striped, then I would say that’s a form of new information—specifically, information that my initial assumption about the nature of the state space was wrong. (It’s not information I can update on; I have to retrospectively change my priors.)
Apologies for the pointless diversion.
An ideal model of the real world must allow any miracle to happen; nothing should be logically prohibited.
Of note, I was operating under a bad assumption with regard to the original example. I assumed that the set of possible colors was either finite but unknown, or infinite. In the former case, every question gives a little information about the possible set; in the latter, it really doesn’t matter much.
A sequence of lilac, turquoise, turquoise, lilac, lilac says far more about what’s in the jar than the two hundred color questions Omega asked you beforehand.
Yes, this is true. Personally, I am still curious about what to do with the two hundred color questions.
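The best I can come up with is that the questions only tell you something under some assumed model of how Omega chooses what to ask, which is exactly what was in dispute above. A toy sketch, with entirely made-up numbers:

```python
# Toy model of the "finite but unknown set" case. The key (purely assumed)
# premise: Omega is somewhat more likely to ask about a color when it is
# actually in the jar. Under that premise each question nudges the probability
# that the asked-about color is present; without it, the questions say nothing.
def p_present_given_asked(prior_present=0.5, p_ask_if_present=0.6, p_ask_if_absent=0.4):
    """P(color is in the jar | Omega asked about that color)."""
    asked_and_present = prior_present * p_ask_if_present
    asked_and_absent = (1.0 - prior_present) * p_ask_if_absent
    return asked_and_present / (asked_and_present + asked_and_absent)

print(p_present_given_asked())                                           # 0.6: a small nudge up
print(p_present_given_asked(p_ask_if_present=0.5, p_ask_if_absent=0.5))  # 0.5: no information
```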