In principle, I see no reason to treat logical probability differently from other probability—if this can be done consistently.
Say there were some empirical fact about the world, concealed inside a safe. And say you could crack the safe’s combination as soon as you solved a certain logical problem. Then “ignorance about the contents of the safe”, a very standard type of ignorance, feels exactly like “ignorance about the logical fact” in this case.
I think you can generally transform one type of uncertainty into another in a way that leaves the intuitions virtually identical.
This is not really what the problem discussed in this post is about. Given a setting where there are many possible worlds covering all kinds of alternative observations, we have three basic kinds of uncertainty: logical uncertainty, uncertainty about the joint state of all possible worlds (“state uncertainty”), and uncertainty about location within the collection of these possible worlds (indexical uncertainty). If there are enough possible worlds in our setting, then most observations of the kind “Is this box empty?” cash out as indexical uncertainty: in some possible worlds it’s empty, and in others it’s not, so the only question is which worlds it’s empty in, a question of finding the locations within the overall collection that fit the query.
Of these, logical uncertainty is closer to state uncertainty than to indexical uncertainty: if you figure out some abstract fact, that may also tell you what all possible (non-broken) calculators will say, but some of the boxes will still be full and some will be empty. Of course, there is no clear dividing line: it’s the structure of the collection of your possible worlds, and the prior over it, that tells you which observations are more like calculators (related to abstract facts) and which are more like boxes (unrelated to abstract facts, mostly only telling you which possible worlds you observe).
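To make the calculator/box distinction concrete, here is a minimal toy sketch in Python (the particular worlds, probabilities, and the `fact`/`box_full` fields are invented for illustration, not taken from the post): under a uniform prior over four possible worlds, conditioning on a calculator-style observation pins down the abstract fact across all remaining worlds, while conditioning on a box-style observation only narrows down which world you might be in and leaves the abstract fact at 50/50.

```python
import itertools

# Toy possible worlds: each fixes the value of one abstract fact (which every
# working calculator in that world reports) and one box's contents (which
# varies freely across worlds).
worlds = [
    {"fact": fact, "box_full": box}
    for fact, box in itertools.product([True, False], [True, False])
]
prior = {i: 0.25 for i in range(len(worlds))}  # uniform prior over the four worlds

def condition(prior, predicate):
    """Bayesian update: keep only the worlds consistent with the observation."""
    kept = {i: p for i, p in prior.items() if predicate(worlds[i])}
    total = sum(kept.values())
    return {i: p / total for i, p in kept.items()}

# "Calculator" observation: resolves the abstract fact, cutting across every
# world the same way -- closer to state/logical uncertainty.
after_calculator = condition(prior, lambda w: w["fact"])

# "Box" observation: only tells you which worlds you might be observing; the
# abstract fact stays undetermined -- indexical uncertainty.
after_box = condition(prior, lambda w: w["box_full"])

print(sum(p for i, p in after_calculator.items() if worlds[i]["fact"]))  # 1.0
print(sum(p for i, p in after_box.items() if worlds[i]["fact"]))         # 0.5
```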
(UDT’s secret weapon is that it reduces all observations to indexical uncertainty: it completely ignores their epistemic significance (their interpretation as abstract facts) and instead relies on its own “protected” inference capacity to resolve decision problems that are set up across its collection of possible worlds in arbitrarily bizarre fashion. But once it starts relying on observations, it has to be cleverer than that.)
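As a rough illustration of that “observations as indices rather than evidence” point, here is a toy sketch (the worlds, payoffs, and probabilities are made up, and this is only a cartoon of UDT-style policy selection, not a faithful implementation): the agent scores entire policies, maps from observation to action, by their expected utility under the prior, and commits to the best one up front instead of updating on what it sees.

```python
from itertools import product

# Toy worlds: each comes with an observation the agent would receive there,
# a payoff table for the available actions, and a prior probability.
worlds = [
    {"obs": "red",  "payoff": {"A": 1, "B": 0}, "p": 0.5},
    {"obs": "blue", "payoff": {"A": 0, "B": 3}, "p": 0.5},
]
observations = ["red", "blue"]
actions = ["A", "B"]

def expected_utility(policy):
    """Score a whole policy (observation -> action) against the prior over worlds."""
    return sum(w["p"] * w["payoff"][policy[w["obs"]]] for w in worlds)

# Enumerate every policy and pick the best one before "seeing" anything; the
# observation is used only to look up which branch of the policy to execute.
policies = [dict(zip(observations, acts))
            for acts in product(actions, repeat=len(observations))]
best = max(policies, key=expected_utility)
print(best, expected_utility(best))  # {'red': 'A', 'blue': 'B'} 2.0
```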
Now, you are talking about how logical uncertainty is similar to state uncertainty, which I mostly agree with, while the problem under discussion is that logical uncertainty seems to be unlike indexical uncertainty, in particular for the purposes of applying UDT-like reasoning.
I was under the impression that there were clear examples where logical uncertainty was different from regular uncertainty. I can’t think of any, though, so perhaps I’m misremembering. I would be very interested in the solution to this.
Logical uncertainty is still a more subtle beastie methinks—but for the examples given here, I think it should be treated like normal uncertainty.