(Again, though, this is your prior before Omega says anything; you then have to update it as soon as ve speaks, given your prior on ver motivations in bringing up a particular color first. That part is trickier.)
How would you update given the following scenarios (assuming a finite, fixed, known set of possible outcomes)?

1. Omega asks you for the probability of a red bead being chosen from the jar.
2. Omega asks you for the probability of “any particular object” being chosen.
3. Omega asks you to name an object from the set and then asks you for the probability of that object being chosen.
I don’t think #2 or #3 give me any new relevant information, so I wouldn’t update. (Omega could be “messing with me” by incorporating my sense of salience of certain colors into the game, but this suspicion would be information for my prior, and I don’t think I learn anything new by being asked #3.)
I would incrementally increase my probability of red in case #1, and decrease the others evenly, but I can’t satisfy myself with the justification for this at the moment. The space of all minds is vast; and while it would make sense for several instrumental reasons to ask first about a more common color, we’re assuming that Omega doesn’t need or want anything from this encounter.
In the real-life cases this is meant to model, though, such as a psychologist running a study in place of Omega, I can model their mind by mine and realize that there are more studies in which I’d ask about a color I know is likely to come up than studies in which I’d pick out a specific less-likely color, so I should update p(red) upward.
But probably not all the way to 1/2.
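The psychologist case can be sketched as a toy Bayesian calculation. Everything below is an illustrative assumption, not part of the original setup: the three candidate jar compositions, the flat prior over them, and the mixture parameter `lam` governing how often the asker names a color in proportion to its true frequency rather than at random.

```python
# Toy sketch: update beliefs about the jar's composition after being
# asked about red, then read off the predictive probability of red.
# All numbers here are made up for illustration.

# Hypotheses: possible jar compositions (fractions of each color).
hypotheses = {
    "mostly_red": {"red": 0.6, "blue": 0.2, "green": 0.2},
    "uniform":    {"red": 1/3, "blue": 1/3, "green": 1/3},
    "little_red": {"red": 0.1, "blue": 0.45, "green": 0.45},
}
prior = {h: 1/3 for h in hypotheses}  # flat prior over compositions

# Assumed asker model: with probability lam, the asker (who knows the
# jar) names a color in proportion to its frequency; otherwise they
# name a color uniformly at random.
lam = 0.5

def p_ask(color, comp):
    return lam * comp[color] + (1 - lam) / len(comp)

# Bayes' rule: posterior over compositions given "asked about red".
evidence = sum(prior[h] * p_ask("red", c) for h, c in hypotheses.items())
posterior = {h: prior[h] * p_ask("red", c) / evidence
             for h, c in hypotheses.items()}

def p_red(dist):
    """Predictive probability of drawing red under a belief dist."""
    return sum(dist[h] * hypotheses[h]["red"] for h in dist)

print(p_red(prior), p_red(posterior))
```

Under these made-up numbers, p(red) rises from about 0.344 to about 0.406: a real but modest update toward red, which matches the intuition above that the question is evidence for red without pushing the probability anywhere near certainty.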