I’d say things differently now. I’d drop the distinction between “logical uncertainty” and uncertainty about the output of one’s own source code: knowledge about a formal system basically is a program that you can run, which basically is part of your source code. (Maybe it comes with observed data, but then the data has become part of you. What distinguishes you observing an event from the event observing you? It’s more like merging.)

The important intuition here is that there is no transparency: having the source code of a program is not at all the same thing as knowing how it behaves. This isn’t even about the halting problem, since even simple calculations are still some computational steps away (although static analysis, i.e. abstraction, may let you run infinitely faster). You are not uncertain about your source code; you are uncertain about what it will do.

Logical hypotheticals can be seen as playing the central role in decision-making: they are the steps in a proof search that suggest steps one’s own (known) algorithms could take, so that one can see whether those steps should be made real. (That is “winning” in game-semantics terminology, which is highly misleading from a goal-directed-strategy point of view, since what is won is only your choice, not the “game”.) While you can’t reach some logical truths in limited time, you can consider their hypothetical states, so the program isn’t so much being modified as being refined where its consequences can’t be directly observed (with a naive formalism, the difference between the program and its effect blurs). I still have serious gaps in my understanding of this, so I’m not ready to describe it properly yet.
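As a minimal sketch of the no-transparency point (a toy example of my own, nothing deeper): you can hold a program’s source in full, here Python’s standard hashlib, and still be unable to say what it outputs without spending the computational steps.

```python
# A minimal sketch: full "transparency" of source code without
# knowledge of behavior. SHA-256 is completely specified, yet the
# digest below is some computational steps away from the reader.
import hashlib

digest = hashlib.sha256(b"logical uncertainty").hexdigest()

# Before running this, nothing short of doing SHA-256's work tells
# you even the first character of `digest`; no halting-problem
# exotica involved, just computation not yet performed.
print(digest)
```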
This doesn’t seem to require slaughtering the intuition that “a logical truth couldn’t be either way” because I can think that a logical truth couldn’t be either way but I just don’t know which way it is, and that still allows me to make the right decision. Do you agree, or do you still think that intuition needs to go?
If the things that “could” be done or “could” happen are the ones considered in hypotheticals during decision-making, then logical truths (the possible behaviors of a program) fit comfortably among the things that could be either way.
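A loose sketch of that reading (my own toy formulation, with all names hypothetical): the “could” set is just whatever a decision procedure enumerates as hypotheticals, even though its eventual output is a logical fact fixed by its source code.

```python
# A toy sketch (hypothetical names throughout): "could" as the set of
# hypotheticals a decision procedure considers before one is made real.

def decide(actions, evaluate):
    """Consider each action hypothetically; make the best one 'real'."""
    # Each action "could" be taken while it is under consideration,
    # even though the procedure will in fact output exactly one of them.
    hypotheticals = {a: evaluate(a) for a in actions}
    return max(hypotheticals, key=hypotheticals.get)

# The return value of decide() is a logical truth about this program,
# yet from inside the procedure each action was a live "could".
print(decide(["one-box", "two-box"],
             {"one-box": 1_000_000, "two-box": 1_000}.get))
```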