This is not really what the problem discussed in this post is about. Given a setting where there are many possible worlds for all kinds of alternative observations, we have three basic kinds of uncertainty: logical uncertainty, uncertainty about the joint state of all possible worlds ("state uncertainty"), and uncertainty about location within the collection of these possible worlds (indexical uncertainty). If there are enough possible worlds in our setting, then most observations of the kind "Is this box empty?" cash out as indexical uncertainty: in some possible worlds it's empty, and in others it's not, so the only question is which worlds it's empty in, a question of finding the locations within the overall collection that fit the query.
Of these, logical uncertainty is closer to state uncertainty than to indexical uncertainty: if you figure out some abstract fact, that may also tell you what all possible (non-broken) calculators will say, but some of the boxes will still be full and some will be empty. Of course, there is no clear dividing line: it's the structure of your collection of possible worlds, and the prior over it, that tells you which observations are more like calculators (related to abstract facts) and which are more like boxes (unrelated to abstract facts, mostly telling you only which possible worlds you are observing). A toy sketch of this contrast follows below.
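To make the contrast concrete, here is a small Python sketch, purely my own toy illustration of the calculator/box distinction above (the world setup and names are assumptions, not anything from the post): the calculator reading is fixed by an abstract fact shared across every world, while the box contents vary world by world, so observing them only locates you.

```python
# Toy illustration: "calculator" observations are settled by an abstract fact
# common to all worlds; "box" observations vary freely and only locate you
# among the worlds consistent with what you saw.
ABSTRACT_FACT = (7 * 13 == 91)  # a logical fact, identical in every world

# Each possible world: the calculator display depends only on the abstract fact;
# whether the box is empty is an independent, world-specific detail.
worlds = [
    {"calculator_says": ABSTRACT_FACT, "box_is_empty": empty}
    for empty in (True, False)
    for _ in range(3)  # several worlds for each box state
]

# Figuring out the abstract fact settles every (non-broken) calculator at once...
assert all(w["calculator_says"] == ABSTRACT_FACT for w in worlds)

# ...but observing "the box is empty" only tells you *which* worlds you occupy.
consistent = [w for w in worlds if w["box_is_empty"]]
print(f"{len(consistent)} of {len(worlds)} worlds match the box observation")
```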
(UDT's secret weapon is that it reduces all observations to indexical uncertainty: it completely ignores their epistemic significance (their interpretation as abstract facts) and instead relies on its own "protected" inference capacity to resolve decision problems that are set up across its collection of possible worlds in arbitrarily bizarre ways. But once it starts relying on observations, it has to be cleverer than that. A miniature sketch of this move is below.)
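For concreteness, here is a toy sketch of the "reduce observations to indexical uncertainty" move; this is my own miniature under assumed worlds and payoffs, not UDT proper. The agent picks one policy mapping observations to actions so as to maximize the prior-weighted payoff summed over all worlds, never conditioning on the observation; the observation only selects which entry of the policy fires.

```python
from itertools import product

observations = ["box_empty", "box_full"]
actions = ["take", "leave"]

# Hypothetical worlds: (prior probability, observation shown there, payoff table).
worlds = [
    (0.5, "box_empty", {"take": 0,  "leave": 1}),
    (0.5, "box_full",  {"take": 10, "leave": 1}),
]

best_policy, best_value = None, float("-inf")
# Enumerate all policies: one action assigned to each possible observation.
for choice in product(actions, repeat=len(observations)):
    policy = dict(zip(observations, choice))
    # Evaluate the policy from the prior, across *all* worlds at once;
    # the observation is treated purely as an index into the policy.
    value = sum(p * payoff[policy[obs]] for p, obs, payoff in worlds)
    if value > best_value:
        best_policy, best_value = policy, value

print(best_policy, best_value)
```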
Now, you are talking about how logical uncertainty is similar to state uncertainty, which I mostly agree with, while the problem under discussion is that logical uncertainty seems to be unlike indexical uncertainty, in particular for the purposes of applying UDT-like reasoning.