The twelve basic colors are so called because they are not kinds of other colors. Lilac and fuchsia are kinds of purple (I guess you could argue that fuchsia is a kind of red, instead, but pretend you couldn’t), and cyan is a kind of blue. Even if you pull out a navy bead and then a cyan bead, they are both kinds of blue in English; in Russian, they would be different colors as unalike as pink and red.
So you’re arguing that by definition, the basic color words define a mutually exclusive and exhaustive set. But there are colors near cyan which are not easy to categorize—the fairest description would be blue-green. In the least convenient world, when Omega asks you for odds on blue-green, you ask it if that color counts as blue and/or green, and it replies, “Neither; I treat blue-green as distinct from blue and green.” Then what do you do?
I was mentally categorizing that as “Omega deliberately screwing with you” by using English strangely, but perhaps that was unmotivated of me. But this gets into a grand metaphysical discussion about where colors begin and end, and whether there is real vagueness around their borders, and a whole messy philosophy of language hissy fit about universals and tropes and subjectivity and other things that make you sound awfully silly if you argue about them in public. I ignored it because the idea of the post wasn’t about colors, it was about probabilities.
That’s a shame, because uncertainty about the number of possible outcomes is a real and challenging statistical problem. See for example “Inference for the binomial N parameter: A hierarchical Bayes approach” by Adrian Raftery. Raftery’s prior for the number of outcomes is proportional to 1/N, which is improper (the weights don’t sum to anything finite), so you can’t use it for coherent betting.
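As a quick aside on why an improper prior fails here, a minimal sketch (Python; the cutoff of ten million is arbitrary):

```python
# The 1/N "prior" over N = 1, 2, 3, ... cannot be normalized: its partial sums
# grow without bound (roughly ln(N) + 0.577), so there is no way to rescale it
# into a genuine probability distribution to bet with.
total = 0.0
for n in range(1, 10_000_001):
    total += 1.0 / n
print(total)  # about 16.7, and still climbing as the cutoff increases
```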
I think there’s also the question of inferring the included name space and possibility space from the questions asked.
If he asks you about HTML color #FF0000 (which is red) after asking you about red, do you change your probability? Assuming he’s using 12 color words just because he used ‘red’ is arbitrary.
Even with defined and distinct color terms, the question is which of those colors are actual possibilities (colors in the jar) as opposed to logical possibilities (colors Omega can name), and I think THAT ties back to Eliezer’s article about Job vs. Frodo.
Personally, I think the intent has less to do with classifying colors strangely and more to do with finding a broader example where even less information is known. The misstep I think I took earlier had to do with assuming that the colors were just part of an example and the jar could theoretically hold items from an infinite set.
I get that when picking beads from the set of 12 colors, it makes sense to guess that red will appear with probability near 1/12. An infinite set, instead of 12, is also interesting as a no-information case. As far as I can tell, there is no good argument for any particular member of the set. So, asking the question directly: what if the beads have integers printed on them? What am I supposed to do when Omega asks me about a particular number?
Unless you have a reason to believe that there is some constraint on what numbers could be used—if only a limited number of digits will fit on the bead, for example—your probability for each integer has to be infinitesimal.
You’re not allowed to do that. With a countably infinite set, your only option for priors that assign everything a number between 0 and 1 is to take a summable infinite series. (Exponential distributions, like that proposed by Peter above, are the most elegant for certain questions, but you can do p(n) = c * n^(-2) or something else if you prefer to have slower decay of probabilities.)
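To make the “summable series” requirement concrete, here is a small sketch (Python; the two distributions are just illustrative instances of the forms mentioned above):

```python
import math

# Two proper priors over the positive integers: a geometric one, and the
# c * n^(-2) one with c = 6/pi^2 chosen so the weights sum to exactly 1.
geometric = lambda n: 2.0 ** (-n)
power_law = lambda n: (6.0 / math.pi ** 2) / n ** 2

for prior in (geometric, power_law):
    print(sum(prior(n) for n in range(1, 200_001)))  # both partial sums are already ~1.0
```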
In the case with colors rather than integers, a good prior on “first bead color, named in a form acceptable to Omega” would correspond to this: take this sort of distribution, starting with the most salient color names and working out from there, but being sure not to exceed 1 in total.
Of course, this is before Omega asks you anything. You then have to have some prior on Omega’s motivations, with respect to which you can update your initial prior when ve asks “Is it red?” And yes, you’ll be very metauncertain about both these priors… but you’ve got to pick something.
I am happy with that explanation. Thanks.
Why not, say, p(n) = (1/3) * 2^(-|n|)?
If p(n) = (1/3) * 2^(-|n|), then:
p(1) = (1/3) * 2^(-1) = 0.166666667
p(86) = (1 / 3) * (2^(-86)) = 4.30823236 × 10^(-27)
p(1 000 000) = (1/3) * 2^(-1 000 000) = Lower than Google’s calculator lets me go
Are you willing to bet that 1 is going to happen that much more often than 1,000,000?
The point is that your probability for the “first” integers will not be infinitesimal. If you think that drops off too quickly, then instead of 2 use 1+e for some small e: p(n) = (e/(e+2)) * (1+e)^(-|n|). And replace n with s(n) if you don’t like that ordering of integers. But regardless, there’s some N for which there is an n with |n| < N such that p(n)/p(m) >> 1 for any m with |m| > N.
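A quick numerical sketch of that family (Python; e = 0.01 is an arbitrary illustrative choice, not a value anyone here proposed):

```python
# p(n) = (e/(e+2)) * (1+e)^(-|n|) over all integers n, with a small e for slow decay.
e = 0.01
p = lambda n: (e / (e + 2)) * (1 + e) ** (-abs(n))

# Still a proper distribution: the mass on |n| <= 5000 is essentially all of it.
print(sum(p(n) for n in range(-5000, 5001)))  # ~1.0

# Even with this slow decay, comparatively small integers carry nearly all the probability.
print(sum(p(n) for n in range(-2000, 2001)))  # ~0.999999998
```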
I wasn’t talking about limiting frequencies, so don’t ask me “how often?”
Would you bet $1 billion against my $1 that no number with absolute value smaller than 3^^^3 will come up? If not then you shouldn’t be assigning infinitesimal probability to those numbers.
I get the feeling that I am thinking about this incorrectly but am missing a key point. If someone out there can see it, please let me know.
Sorry.
If the set of possible options is all integers and Omega asks about a particular integer, why would the probability go up the smaller the number gets?
Betting on ranges seems like a no-brainer to me. If Omega comes and asks you to pick an integer and then asks me to bet on whether an object pulled from the jar will have an absolute value over or under that integer, I should always bet that the number will be higher than yours.
If I had a random number generator that could theoretically pull a random number from all integers, it seems weird to assume it will be small. As far as I know, such a random number generator is impossible. Assuming it is impossible, there must be a cap somewhere in the set of all integers. The catch is that we have no idea where this cap is. If you can write 3^^^3, I can write 3^^^3 + 1, which leads me to believe that no matter what number you pick, the cap will be significantly higher. As long as I can cover the costs of the bet, I should bet against you.
The math works like this: given the option to place my $X against your $Y that, when you pick an integer Z, Omega will pull a number out of the jar greater than the absolute value of Z, there is always a way to express (X/Y) * Z + 1; and, assuming I place equal probability on each possible integer between 0 and (X/Y) * Z + 1, I should always take the bet.
A trivial example: If I bet $5 against your $1 and you pick the integer 100, I can easily imagine the number 501. 501/100 is greater than 5/1. I should take the bet.
The problem seems to be that I am placing equal probabilities on each possible integer while you favor numbers closer to 0. Favoring numbers like 1 or 2 makes a lot of sense if someone came up to me on the street with a bucket of balls with numbers printed on them. I would also consider the chances of pulling 1 to be much higher than 3^^^3.
So, perhaps, my misstep is thinking of Omega’s challenge as a purely theoretical puzzle and not associating it with the real world. In any case, I certainly do not want to give the impression that I think 3^^^3 is just as likely to appear as 42 in the real world. Of course, in the real world I wouldn’t bet on anything at all because I do not consider the information available to be useful in determining the correct action and I am ridiculously risk averse.
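For what it’s worth, here is a rough sketch of where the two intuitions in this exchange come apart (Python; the cap of one million and the particular decaying prior are purely illustrative assumptions):

```python
# How likely is a draw with absolute value greater than Z = 100?
Z = 100

# (a) Equal probability on everything up to some huge unknown cap C:
C = 10 ** 6
print((C - Z) / (C + 1))  # ~0.9999, so betting "higher" looks like a sure thing

# (b) A proper decaying prior such as p(n) = (1/3) * 2^(-|n|):
p = lambda n: (1.0 / 3) * 2.0 ** (-abs(n))
print(2 * sum(p(n) for n in range(Z + 1, Z + 200)))  # ~5e-31: nearly all mass is on small numbers
```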
To dispel this confusion, you should read up on algorithmic information theory.
Is there a good place to start online? Can I just Google “algorithmic information theory”?
Google first and ask questions later. ;-)
You are not doing so, since it is impossible. No such probability distribution exists. In fact you recognize this by saying there’s a cap somewhere out there, you just don’t know where. Well, this cap means that small numbers (smaller than the cap) have much, much higher probability (i.e. nonzero) than large numbers (those higher than the cap have zero probability).
Maybe this will serve as an intuition pump: suppose you’ve narrowed down your cap to just a few numbers. In fact, just N and 2N. You’ve given them each equal weight. Well, now p(1) = (1/N + 1/(2N))/2 = 3/(4N), but p(N+k) = 1/(4N), and p(2N+k) = 0. The probability goes down as numbers get larger. Determine your priors over all the caps, compute the resulting distribution, and you’ll find p(n) eventually starts to decrease.
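A toy version of that intuition pump (Python; N = 10 is an arbitrary illustrative value):

```python
# Equal prior weight on two possible caps, N and 2N, where a cap C means
# "the drawn number is uniform over 1..C". The induced p(n) drops as n grows.
N = 10

def p(n):
    mass = 0.0
    for cap in (N, 2 * N):    # each cap has prior probability 1/2
        if 1 <= n <= cap:
            mass += 0.5 / cap
    return mass

print(p(1))          # 3/(4N) = 0.075
print(p(N + 1))      # 1/(4N) = 0.025
print(p(2 * N + 1))  # 0.0
```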