Sorry, you haven’t convincingly demonstrated the wrongness of 50%. MrHen’s position seems to me quite natural and defensible, provided he picks a consistent prior. For example, I’d talk with Omega exactly as you described up to this point:
...Omega: What is the probability of the first bead being blue as opposed to non-blue?
Me: 25%.
You ask why 25%? My left foot said so… or maybe because Omega mentioned red first and blue second. C’mon, Dutch-book me.
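To make "consistent prior" concrete, here is one assignment (a guess on my part, not something stated above) that reproduces the 50% and 25% answers: give the n-th color Omega mentions probability 2^-n. Over an open-ended list of possible colors these weights sum to 1, so the assignment is coherent and offers no Dutch book:

```python
# Sketch of one consistent prior (my assumption): the n-th color Omega
# mentions gets probability 2**-n.  Over an open-ended list of colors the
# weights sum to 1, so the assignment is a coherent distribution.
weights = [0.5 ** n for n in range(1, 13)]  # first twelve mentioned colors
print(weights[0], weights[1])  # 0.5, 0.25 -> "red" 50%, "blue" 25%
print(sum(weights))            # 0.99975...; approaches 1 as the list grows
```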
I think that doing it this way assumes that Omega is deliberately screwing with you and will ask about colors in a way that is somehow germane to the likelihood. Assume he picked “red” to ask about first at random out of whatever colors the beads come in.
This new information gives me grounds to revise my estimates as Omega asks further questions, but I still don’t see how it demonstrates the wrongness of initially answering 50%.
The reason 50⁄50 is bad is because the beads in the jar come in no more than 12 colors and we have no reason to favor red over the other 11 colors.
Knowing there is a cap of 12 possible options, it makes intuitive sense to start by giving each color equal weight until more information appears. (Namely, once Omega starts pulling beads.)
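As a quick arithmetic check of that equal-weight starting point (taking the full cap of 12 colors as the live possibilities, which is an assumption about the setup):

```python
# Equal weight over the 12 possible colors before any beads are drawn.
n_colors = 12
p_red = 1 / n_colors
print(round(p_red, 3))      # 0.083 -> about 8.3% for red
print(round(1 - p_red, 3))  # 0.917 for non-red; nowhere near 50/50
```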
We have a reason: Omega mentioned red.
I suppose the relevant question is now, “Does Omega mentioning red tell us anything about what is in the jar?” When we know the set of possible objects in the jar, it really tells us nothing new. If the set of possible objects is unknown, now we know red is a possibility and we can adjust accordingly.
The assumption here is that Omega is just randomly asking about something from the possible set of objects. Essentially, since Omega is admitting that red could be in the jar, we know red could be in the jar. In the 12 color scenario, we already know this. I do not think that Omega mentioning red should affect our guess.
All this arguing about priors eerily resembles scholastics, balancing angels on the head of a pin. Okay I get it, we read Omega’s Bible differently: unlike me, you see no symbolic significance in the mention of red. Riiiiight. Now how about an experiment?
Agreed. For what it is worth, I do see some significance in the mention of red, but cannot figure out why and do not see the significance in the 12 color example. This keeps setting off a red flag in my head because it seems inconsistent. Any help in figuring out why would be nifty.
In terms of an experiment, I would not bet at all if given the option. If I had to choose, I would choose whichever option costs less and write it off as a forced expense.
In English: if Omega said he had a dollar on the next bead being red and asked what I would bet against it, I would bet nothing. If I had to pick a non-zero amount, I would pick the smallest available.
But that doesn’t seem very interesting at all.
Then each time Omega mentions another color, it increases the expected number of colors the beads come in.
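A toy version of that update (all modeling choices here are my own assumptions, and the cap of 12 is only there to keep the example finite): suppose the number of colors K in the jar is uniform on 1 to 12 a priori, and Omega picks each question's color uniformly at random from the colors actually present. Hearing a second, different color then has likelihood (K-1)/K, which shifts the posterior toward larger K:

```python
# Toy model (assumed): K, the number of colors in the jar, is uniform on
# 1..12; Omega draws each mentioned color uniformly from the K colors
# present.  Observing a *new* color on the second mention has likelihood
# (K - 1) / K, so the posterior expected number of colors rises.
ks = range(1, 13)
prior = {k: 1 / 12 for k in ks}
likelihood = {k: (k - 1) / k for k in ks}
unnorm = {k: prior[k] * likelihood[k] for k in ks}
z = sum(unnorm.values())
posterior = {k: v / z for k, v in unnorm.items()}

print(sum(k * p for k, p in prior.items()))      # prior E[K] = 6.5
print(sum(k * p for k, p in posterior.items()))  # posterior E[K] ≈ 7.4
```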
I think Alicorn is operating under a strict “12 colors of beads” idea based on what a color is or is not. As best as I can tell, the problem is essentially, “Given a finite set of bead colors in a jar, what is the probability of getting any particular color from a hidden mixture of beads?” The trickiness is that each color could appear in a different quantity in the jar, not that the number of possible colors is unbounded.
Alicorn answered elsewhere that when the jar has an infinite set of possible options the probability of any particular option would be infinitesimal.
If the number of possible outcomes is finite, fixed, and known, but no other information is given, then there’s a unique correct prior: the maximum-entropy prior, which gives equal weight to each possibility.
(Again, though, this is your prior before Omega says anything; you then have to update it as soon as ve speaks, given your prior on ver motivations in bringing up a particular color first. That part is trickier.)
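A quick numerical illustration of the maximum-entropy claim (not a proof; the skewed distribution below is just an arbitrary comparison point):

```python
from math import log2

def entropy(ps):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * log2(p) for p in ps if p > 0)

n = 12
uniform = [1 / n] * n                    # equal weight on each of 12 colors
skewed = [0.5, 0.25] + [0.25 / 10] * 10  # e.g. favoring the mentioned colors

print(entropy(uniform))  # log2(12) ≈ 3.58 bits, the maximum possible
print(entropy(skewed))   # ≈ 2.33 bits; any non-uniform prior scores lower
```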
How would you update given the following scenarios (this is assuming finite, fixed, known possible outcomes)?
1. Omega asks you for the probability of a red bead being chosen from the jar.
2. Omega asks you for the probability of “any particular object” being chosen.
3. Omega asks you to name an object from the set and then asks you for the probability of that object being chosen.
I don’t think #2 or #3 give me any new relevant information, so I wouldn’t update. (Omega could be “messing with me” by incorporating my sense of salience of certain colors into the game, but this suspicion would be information for my prior, and I don’t think I learn anything new by being asked #3.)
I would incrementally increase my probability of red in case #1, and decrease the others evenly, but I can’t satisfy myself with the justification for this at the moment. The space of all minds is vast; and while it would make sense for several instrumental reasons to question first about a more common color, we’re assuming that Omega doesn’t need or want anything from this encounter.
In the real-life cases this is meant to model, though, like having a psychologist doing a study in place of Omega, I can model their mind on mine and realize that there are more studies in which I’d ask about a color I know is likely to come up than studies in which I’d pick a specific less-likely color, and so I should update p(red) positively.
But probably not all the way to 1⁄2.
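A rough sketch of that last exchange under one made-up model (every modeling choice here is an assumption of mine): the jar's color proportions are drawn from a uniform Dirichlet prior over 12 colors, and the psychologist asks about a color with probability equal to how common it is in the jar. Conditioning on red having been asked about then lifts P(first bead is red) from 1/12, about 0.083, to roughly 2/13, about 0.154, a real update but nowhere near 1/2:

```python
import numpy as np

rng = np.random.default_rng(0)
n_colors, n_trials = 12, 200_000

# Assumed model: jar proportions ~ Dirichlet(1, ..., 1); the asker mentions
# a color with probability equal to its proportion; the first bead drawn is
# red with probability equal to red's proportion.
p = rng.dirichlet(np.ones(n_colors), size=n_trials)
p_red = p[:, 0]  # treat color 0 as "red"

prior_p_red = p_red.mean()                       # ~ 1/12 ~ 0.083
asked_about_red = rng.random(n_trials) < p_red   # trials where red is asked about
posterior_p_red = p_red[asked_about_red].mean()  # ~ 2/13 ~ 0.154

print(prior_p_red, posterior_p_red)
```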