For me, the Omega problem described in the post presents the following conundrum: what is a probability in the limit of no information?
Suppose we take a pragmatic perspective: the “probability of an event”, as a mathematical object, is a tool for summarizing information about the past and/or future occurrence of that event. In the limit of no information, on this pragmatic view, there is no justification for assigning a probability: not because we don’t know what it is, but because there is no information for it to summarize.
If you don’t care about being pragmatic, and you want to define a probability simply because everything must have a probability between 0 and 1, then I don’t see how specifying a value could be justified: doing so either eliminates information (by constraining the space of possible probabilities) or makes up information (by arbitrarily picking a probability).
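To see concretely what “making up information” looks like, consider the standard escape hatch (a sketch only; the uniform prior here is my assumption, not anything Omega supplied): posit an indifference prior over the unknown fraction r of red balls. Then

\[
P(\text{red}) = \int_0^1 P(\text{red}\mid r)\,p(r)\,dr = \int_0^1 r \cdot 1\,dr = \tfrac{1}{2}.
\]

The tidy 1/2 is manufactured entirely by the choice of prior; a different arbitrary prior over r would manufacture a different number, which is exactly the problem.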
So finally, if Omega asked me for the probability of his choosing a red ball, I would be forced to say something like, “To the extent that I can guess what you might mean by probability (respectfully, Omega, it’s not entirely well-defined), the probability is some p with 0 < p < 1.”
How could I truthfully say more?
I find this problem interesting because it is a gray area in the correct approach to working with definitions. One perspective I’ve encountered on Less Wrong is that in “real world” situations you have to let definitions take shape as you go: finding the solution to a problem involves finding the right/good/proper definition, so that the definition you end up with tells you which world you’re in. See Disputing Definitions.
This problem is a context where you cannot find a good, workable definition of probability (other than the original abstract mathematical object), because you’re not allowed to constrain which world you’re in: doing so would require making up information.