To make a bit of a point here, which might clarify the discussion:
A first problem with this is that there is no sharp distinction between purely computational (analytic) information/observations and purely empirical (synthetic) ones. This is a deep philosophical point, well known in the analytic philosophy literature, best represented by Quine's "Two Dogmas of Empiricism" and his idea of the "Web of Belief". (It is also related to Radical Probabilism.) But it's unclear whether this philosophical problem translates into a pragmatic one, so let's just assume that the laws of physics are such that all superintelligences we care about converge on the same classification of computational vs. empirical information.
I’d say the major distinction, which Quine ignored, between logical/mathematical/computational uncertainty and empirical uncertainty is this: empirical uncertainty is the problem of starting from a prior and updating, where the worlds/hypotheses being updated on are all as self-consistent/real as each other. Even with infinite compute, observing empirical evidence therefore yields genuinely new information, because it reduces the number of possible states we could be in.
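To make this concrete, here is a toy sketch of my own (the worlds and observation are hypothetical, not from the original text): a uniform prior over equally real candidate worlds, where conditioning on an observation zeroes out the inconsistent worlds. Even an unbounded reasoner gains information here, because the observation itself is what rules worlds out.

```python
# Toy sketch of empirical updating: worlds are equally "real",
# and an observation eliminates some of them, so even an agent with
# unlimited compute gains information by observing.
from fractions import Fraction

# Four candidate worlds (hypothetical example), uniform prior.
worlds = {"w1": {"sky": "blue"}, "w2": {"sky": "blue"},
          "w3": {"sky": "red"},  "w4": {"sky": "green"}}
prior = {w: Fraction(1, 4) for w in worlds}

def update(prior, predicate):
    """Condition on an observation: zero out inconsistent worlds, renormalize."""
    posterior = {w: (p if predicate(worlds[w]) else Fraction(0))
                 for w, p in prior.items()}
    total = sum(posterior.values())
    return {w: p / total for w, p in posterior.items()}

# Observing "the sky is blue" eliminates w3 and w4: genuine new information,
# since the number of possible states we could be in has shrunk from 4 to 2.
posterior = update(prior, lambda world: world["sky"] == "blue")
print(posterior)
```

The point of the sketch is that no amount of pre-computation could have produced this posterior: the update depends on which observation actually arrives.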
Meanwhile, logical/mathematical/computational uncertainty is a case where you know a priori that there is only one correct answer, and you are uncertain solely because you are a bounded reasoner. With infinite compute, like the model of computation linked below, you could in principle compute the correct answer, which holds everywhere. This is why logical uncertainty was so hard: since there is only one possible answer and resolving it just requires computing time, logical uncertainty breaks standard update procedures, and the theoretical solution is logical induction.
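A toy contrast, again my own sketch rather than anything from logical induction proper: "is N prime?" has exactly one correct answer, and a bounded reasoner that has only run trial division up to some budget holds an intermediate credence. The credence rule here is a hypothetical heuristic for illustration; the point is only that the credence collapses to 0 or 1 as compute grows, because the uncertainty was never about which world we are in.

```python
# Toy sketch: logical uncertainty comes from bounded compute.
# "Is n prime?" has one correct answer; a reasoner that has only
# checked divisors up to `budget` holds an intermediate credence.

def credence_prime(n, budget):
    """Crude credence that n is prime after trial division up to `budget`.
    The interpolation rule (1 - 1/d) is a made-up heuristic for illustration."""
    d = 2
    while d <= budget and d * d <= n:
        if n % d == 0:
            return 0.0   # found a factor: the question is settled, answer "no"
        d += 1
    if d * d > n:
        return 1.0       # exhausted all possible factors: settled, answer "yes"
    # Unresolved: more surviving trial divisions -> higher (heuristic) credence.
    return 1 - 1 / d

n = 10007  # happens to be prime
for budget in (2, 10, 50, 200):
    print(budget, credence_prime(n, budget))
# The credence rises monotonically and reaches exactly 1.0 once the budget
# covers sqrt(n): the remaining uncertainty was purely about our own compute.
```

Note how this differs from the empirical case: no observation of the external world is involved, only more computation, which is exactly why ordinary conditioning has nothing to condition on.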
Note that I haven’t solved the other problems around updating on computations (where there is only one correct answer) versus being updateless about empirical uncertainty (where multiple correct answers are allowed).
Model of computation:
https://arxiv.org/abs/1806.08747
Logical induction:
https://arxiv.org/abs/1609.03543