With empirical uncertainty, it's easier to separate updating from reasoning. You can reason without restriction and avoid the need to update, because you are not making new observations. You can make new observations at a time of your own choosing, and then freely reason about how to update on them.
With logical uncertainty, reasoning simultaneously updates you on all kinds of logical claims that you didn't set out to observe at this time, so the two processes are hard to disentangle. It would be nice to have better conceptual tools for describing what it means to hold a given state of logical uncertainty, and how that state should be updated. But even that wouldn't obviously solve the problem of reasoning always getting entangled with unintended logical updating.
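The empirical picture can be sketched as a toy Bayesian model (the coin hypotheses and probabilities below are made up purely for illustration): updating is an explicit, discrete operation, and any amount of deliberation between calls leaves the belief state untouched. This is the separability that the logical case lacks.

```python
def update(prior, likelihoods, observation):
    """One Bayes step: posterior(h) is proportional to prior(h) * P(observation | h)."""
    unnorm = {h: p * likelihoods[h][observation] for h, p in prior.items()}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

# Two hypothetical hypotheses about a coin: fair vs. biased toward heads.
prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {
    "fair":   {"H": 0.5, "T": 0.5},
    "biased": {"H": 0.9, "T": 0.1},
}

# We choose when to observe; beliefs change only at these explicit calls,
# no matter how much reasoning happens in between.
posterior = update(prior, likelihoods, "H")
posterior = update(posterior, likelihoods, "H")
```

In the logical case there is no analogous `update` call to defer: merely computing a consequence already shifts your credences, whether or not you set out to observe it.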