“We’ll bypass the novice mistake of calling it .5, of course; just because the options are binary (red or non-red) doesn’t make them equally likely. It’s not like you have any information.”
Well, if you truly had no information, 0.5 would be the correct (entropy-maximizing given the constraints) bet. If you have no information, you can call it “A or !A” or “!B or B” and it sounds the same: you can’t say one is more likely.
By assigning a different probability, you’re saying that you have information: the word “red” means something to you, and it’s less likely than half (say, because there are 11 other “colors”).

Likewise, if I ask how likely A is and how likely !A is, you have to say 0.5. If A turns out to be “I’m gonna win the lottery tomorrow”, then you can update and P goes to near zero. You didn’t screw up, though, since it could just as easily have been “I won’t win the lottery tomorrow”. If you don’t think that it’s just as likely, then that is information.

When you hear people saying “winning the lottery is 50/50 because either you win or you don’t”, their error isn’t that they “naively” predict 0.5 in total absence of information. Their problem is that they don’t update on the information that they do have.
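As a minimal check of the zero-information claim above (the function name and the candidate values are mine, purely for illustration): the entropy of a yes/no bet is maximized at exactly 0.5.

```python
# Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p) is maximized
# at p = 0.5, which is why 0.5 is the zero-information bet on "A or !A".
from math import log2

def binary_entropy(p: float) -> float:
    """Entropy in bits of a Bernoulli(p) distribution."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

candidates = [0.1, 0.25, 0.5, 0.75, 0.9]
best = max(candidates, key=binary_entropy)
print(best, binary_entropy(best))  # 0.5, 1.0 bit: the max-entropy choice
```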
Well, I suppose you do have information inasmuch as you know what colors are. But if your probability for red is .5, on the basis of knowing that it’s a color alone, then you have to have the same probabilities for blue and yellow and green and brown and so forth if Omega asks for those too, and you can be Dutch booked like crazy.
If Omega asks “What is p(red)?” then I may well consider that I have no information and reply 0.5.
If Omega then asks me “What are p(blue) and p(yellow)?” then I have new information. I would update p(red), p(blue) and p(yellow) to new numbers. By induction I would probably assign each a somewhat lower probability than 0.33 by this stage since p(jar contains all basic colors) has increased.
The most important thing is that I would never have probabilities for mutually exclusive outcomes that simultaneously sum to greater than 1. I may, however, declare (or even bet on) a probability that I update downwards when given new information.
I may end up in a situation where all the bets I have laid sequentially have an expected loss when considered together. This is unfortunate but does not indicate an error of judgement. It simply suggests that at the time of my bet on red I did not expect the new bets or the information contained therein. In later bets I reject consistency bias.
If you can be Dutch booked about probabilities asked in close sequence, where you gain no new information except that a question was asked, I’d think that reflects a considerable failure of rationality. There are grounds to reject “temporal Dutch book arguments”, but this isn’t one; the time and the “new information” should both be negligible.
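To make the Dutch book concrete, here is a toy construction (my own, not from the thread): if you quote 0.5 for each of n mutually exclusive colors, you should be willing to buy, at $0.50 each, a ticket per color paying $1 if the bead is that color, and at most one ticket can pay out.

```python
# Sure loss from quoting p = 0.5 on each of n mutually exclusive colors:
# you pay 0.5 * n for the tickets, and at most one can pay $1.
def sure_loss(quoted_p: float, n_colors: int) -> float:
    cost = quoted_p * n_colors       # what you pay for all n tickets
    best_case_payout = 1.0           # only one color can obtain
    return cost - best_case_payout   # positive = you lose whatever happens

for n in (2, 3, 12):
    print(n, sure_loss(0.5, n))
# n=2: 0.0 (no sure loss from this book alone), n=3: 0.5,
# n=12: 5.0 -- "Dutch booked like crazy"
```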
To put it differently, if you have no information about what beads are in the jar, then you have even less information about why Omega wants to know your probabilities for the sorts of beads in the jar. Omega is a weird dude. Omega asking you a question does not mean what it means when a human asks you the same question.
I reject the proposition “Omega asking a question supplies negligible new information”. What information you glean from Omega depends entirely on what your prior beliefs about likely Omega behavior are. It is not at all absurd for a rational entity to have beliefs representing the reasoning “if Omega asks about multiple basic colors then there is a higher probability that his bead jar contains beads selected from the basic colors than if he only asks about red beads”.
For my part, I would definitely not assign p(red) = 0.5 on the first question and then update to p(red) = 1/n(basic colors). I would, however, lower p(red) by more than 0.
“Omega asking you a question does not mean what it means when a human asks you the same question.”

That is true, it doesn’t. However, this is an argument for a zero-information 0.5 probability, not against it. We (that is, yourself in your initial post and me in my replies here) are using inductive reasoning to assign probabilities based on our knowledge of human color labels. You have extracted information from “What is p(red)?” and used it to update from 0.5 in the direction of 1/12. The same process must also be allowed to apply when further questions are asked.
If “What is p(red)?” provokes me to even consider the number 0.083 (that is, 1/12) in my reasoning, then “What is p(red)? … What are p(blue) and p(yellow)?” will provoke me to consider 0.083 with greater weight. The question “What is p(darkturquoise)?” must also provoke me to consider a significantly lower figure.
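A toy model of this updating, with every number invented purely for illustration: suppose two hypotheses about the jar, one on which it uses the 12 basic colors and one on which it doesn’t, and suppose (this is the loudly labeled assumption) that Omega is twice as likely to ask about several basic colors when the first hypothesis is true.

```python
# Hypothetical update on the "evidence" that Omega asks about more basic
# colors. Under H_basic, p(red | H_basic) = 1/12; under H_other, say 0.5.
prior_basic = 0.5
likelihood_ratio = 2.0  # p(questions | H_basic) / p(questions | H_other), assumed

posterior_basic = (prior_basic * likelihood_ratio /
                   (prior_basic * likelihood_ratio + (1 - prior_basic)))

def p_red(p_basic: float) -> float:
    # Law of total probability over the two hypotheses.
    return p_basic * (1 / 12) + (1 - p_basic) * 0.5

print(p_red(prior_basic))      # ~0.292 before the extra questions
print(p_red(posterior_basic))  # ~0.222 after: nudged toward 1/12, not jumped to it
```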
Either questions give information or they don’t.
And it’s always .5, I hope.
Your probability of updating downwards should be (more or less; not exactly) equal to one minus your original probability; i.e., if your original probability is .25, your probability of updating downwards should be around .75. This is obvious: if there is a one-in-four chance that the thing is so, there is a three-in-four chance that you will find out that it is not so, when you find out whether it is so or not.
Conservation of expected evidence doesn’t mean that the chance of updating upwards is equal to the chance of updating downwards. It also takes into account the magnitude of the change; i.e., if my probability is .25 and I update upwards, I will have to update three times as much as if I had updated downwards.
You’re right. Thanks.
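For concreteness, a quick numerical check of the .25 example from this exchange:

```python
# With prior 0.25 on a binary question that will be fully resolved:
# you update up to 1 with probability 0.25 (a move of +0.75) and down
# to 0 with probability 0.75 (a move of -0.25). The expected update is
# zero, even though updating downwards is three times as likely.
prior = 0.25
p_up, move_up = prior, 1.0 - prior    # P(resolves true), size of the jump
p_down, move_down = 1 - prior, prior  # P(resolves false), size of the drop

expected_update = p_up * move_up - p_down * move_down
print(expected_update)  # 0.0: conservation of expected evidence
```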
What if you know jar A is 80% red and jar B is 0% red, and you know you’re looking at one of them, and your confidence that it’s A is 0.625? Then you have probability 0.5 that a bead chosen from the jar in front of you is red, but will update upwards with probability 0.625 if you’re given the information of which jar you’re looking at.
My comment assigns a probability to updating upwards or downwards in a generic way when new information is given; your comment calculates based on “if you’re given the information of which jar you’re looking at”, which is more concrete. You could also be given other information which would make it more likely that you’re looking at B.
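And checking the two-jar numbers directly:

```python
# Jar A is 80% red, jar B is 0% red, and your credence that you face
# jar A is 0.625.
p_A = 0.625
red_given_A, red_given_B = 0.8, 0.0

p_red = p_A * red_given_A + (1 - p_A) * red_given_B
print(p_red)  # 0.5, as claimed

# Learning which jar it is resolves the question: with probability
# 0.625 you update UP to 0.8; with probability 0.375 you update DOWN
# to 0.0. The expected posterior still equals the prior:
expected_posterior = p_A * red_given_A + (1 - p_A) * red_given_B
print(expected_posterior)  # 0.5 -- conserved, even though P(up) = 0.625
```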
No, it’s not. (You either win the lottery, or you don’t.)
Excuse me for making such a minor point, but I don’t think we have to give the same probability for each color. We have to guess at Omega’s motivation before we can guess at the distribution of bead colors in the jar. Do we have previous knowledge of Omegas? How about Omegas bearing bead-filled jars?
I was assuming that you have never met an Omega, much less one bearing a bead jar, and that you know all the standard facts about Omega (e.g. what he says is true, etc.)
I think I would agree partially with both of you. If I assume that there is no information at all, .5 is a good choice. Once a bead of any color is pulled out, I can start making guesses at the potential number of beads in the jar from the relative volumes of the jar and the bead; and if I know that there is a finite number of potential colors, I might take a guess as to what the probability of any particular color distribution is. Once a red bead is pulled, I might adjust the probability that Omega is not screwing with me, etc.