If Omega asking if the bead could be striped changes your probability estimates, then you were either wrong before or wrong after (or likely both).
If Omega tells you at the outset that the beads are all solid colors, then you should maintain your zero estimate that any are striped. If not, then you never should have had a zero estimate. He’s not giving you new information, he’s highlighting information you already had (or didn’t have).
I don’t see any way to establish a reliable (non-anthropomorphic) chain of causality that connects there being red beads in the jars with Omega asking about red beads. He can ask about beads that aren’t there, and that couldn’t be there given the information he’s given you.
When Omega offered to save x+1 billion people if the earth was less than 1 million years old, I don’t think anyone argued that his suggesting it should change our estimates.
> I don’t see any way to establish a reliable (non-anthropomorphic) chain of causality that connects there being red beads in the jars with Omega asking about red beads.
If I initially divide the state space into solid colours, and then Omega asks if the bead could be striped, then I would say that’s a form of new information—specifically, information that my initial assumption about the nature of the state space was wrong. (It’s not information I can update on; I have to retrospectively change my priors.)
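A minimal sketch (not from the thread; `bayes_update` is a hypothetical helper) of why a zero prior can't be "updated on": Bayes' rule multiplies by the prior, so once an outcome gets probability zero, no evidence can revive it, and the only remedy is to go back and change the prior itself.

```python
# Sketch: a zero prior is immune to Bayesian updating.
# bayes_update is an illustrative helper, not anyone's real API.

def bayes_update(prior, likelihood, evidence_prob):
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# State space assumed solid-colors-only, so "striped" got zero mass.
p_striped = 0.0

# Even evidence strongly associated with striped beads cannot move it:
posterior = bayes_update(p_striped, likelihood=0.9, evidence_prob=0.5)
print(posterior)  # 0.0 -- the prior must be rewritten, not updated
```

The multiplication by `prior` is the whole story: anything times zero stays zero, which is why Omega's question forces a retrospective change of priors rather than an ordinary update.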
> If Omega asking if the bead could be striped changes your probability estimates, then you were either wrong before or wrong after (or likely both).
> If Omega tells you at the outset that the beads are all solid colors, then you should maintain your zero estimate that any are striped. If not, then you never should have had a zero estimate. He’s not giving you new information, he’s highlighting information you already had (or didn’t have).
> I don’t see any way to establish a reliable (non-anthropomorphic) chain of causality that connects there being red beads in the jars with Omega asking about red beads. He can ask about beads that aren’t there, and that couldn’t be there given the information he’s given you. When Omega offered to save x+1 billion people if the earth was less than 1 million years old, I don’t think anyone argued that his suggesting it should change our estimates.
There’s no need to, because probability is in the mind.
If you’re going to update based on what Omega asks you, then you must believe there is a connection that you have some information about.
If we don’t know anything about Omega’s thought process or goals, then his questions tell us nothing.
I think our only disagreement is semantic.
> If I initially divide the state space into solid colours, and then Omega asks if the bead could be striped, then I would say that’s a form of new information—specifically, information that my initial assumption about the nature of the state space was wrong. (It’s not information I can update on; I have to retrospectively change my priors.)
Apologies for the pointless diversion.
An ideal model of the real world must allow any miracle to happen; nothing should be logically prohibited.