You do not have beliefs about the FTA; you have opinions on the usefulness of the definitions which imply it.
This is false as a psychological description of my personal state of mind. I don’t know the precise definitions that entail the FTA and I certainly don’t know a proof. (In particular, I don’t think I could give you a correct construction or definition for the real numbers.) I believe in the theorem because I’ve seen it asserted in trustworthy reference works. Somebody somewhere might have beliefs about the theorem that were tied to their beliefs in the definitions, but this doesn’t describe me. I can believe the [deductive] consequences of a claim without knowing the definitions or being able to reproduce the deduction.
Here’s a related example with a larger bullet for you to chew on. Suppose I have a (small) computer program that takes arbitrary-sized inputs. I might believe that it will work correctly on all possible inputs. Is that a belief or not? It can be made as rigorously provably correct as the FTA.
When I say “the program is correct”, I am not saying “it is useful to construe the C language and the program code in such a way that...”. I’m making an assertion about how the program would behave under all possible inputs.
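For concreteness, here is a minimal sketch of the kind of program I have in mind (my own illustration; the function name `count_lines` and the specific task are assumptions, not taken from the original exchange). The correctness claim quantifies over arbitrarily large inputs, yet it is provable by induction on the input length, assuming the C abstract machine behaves as the standard specifies:

```c
#include <assert.h>
#include <stddef.h>

/* Count the '\n' bytes in a buffer of length n.  The claim "for every
   buffer and every n, the return value equals the number of newlines"
   ranges over infinitely many possible inputs, but it can be proven
   by induction on n -- provided we assume the abstract machine runs
   as specified (no cosmic rays, no flipped bits). */
static size_t count_lines(const char *buf, size_t n) {
    size_t lines = 0;
    for (size_t i = 0; i < n; i++)
        if (buf[i] == '\n')
            lines++;
    return lines;
}
```

The point is that the assertion "this is correct on all inputs" is a single universally quantified statement, of the same logical shape as a theorem, not a summary of finitely many test runs.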
Beliefs about computer programs might feel more empirical than beliefs about theorems, but they are logically equivalent, so either both or neither are beliefs, it seems.
Please observe that one of the possible inputs to your computer is “A cosmic ray flips a bit and turns JMP into NOP, causing data to be executed as though it were code”. In other words, your proof of correctness relies on assumptions about what happens in the physical computer. Those assumptions are testable beliefs, just like the intuitions that go into geometry or the FTA.