It seems to me that a theorem prover based on Peano arithmetic makes “is” claims that don’t depend on any “ought” claims. And if the prover is part of a larger agent with desires, the claims it makes are still the same, so they don’t suddenly depend on “ought” claims.
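For concreteness, a minimal sketch of the kind of output I have in mind (the Python below is purely illustrative, not any real prover): a mechanical check of a closed arithmetic identity using the Peano-style recursive definitions of addition and multiplication. What it emits is a bare "is" claim about numbers; nothing in the computation mentions goals or obligations.

```python
# Illustrative sketch only: evaluate closed PA terms by the recursive
# definitions of + and *, then check a concrete identity. The result is a
# bare "is" claim; no normative vocabulary appears anywhere in the code.

def plus(a, b):
    # PA recursion: a + 0 = a ;  a + S(b) = S(a + b)
    return a if b == 0 else plus(a, b - 1) + 1

def times(a, b):
    # PA recursion: a * 0 = 0 ;  a * S(b) = a * b + a
    return 0 if b == 0 else plus(times(a, b - 1), a)

# "is" claim: S(S(0)) * S(S(0)) = S(S(S(S(0))))
print(times(2, 2) == 4)  # True
```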
PA theorem provers aren’t reflective philosophical agents that can answer questions like “what is the origin of my axioms?”
To psychologize them to the point that they "have beliefs" or "make claims" is already to interpret them according to a normative/functional theory of mind, one under which it is conceivable that the prover could be broken.
A philosophical mathematician using a PA theorem prover relies on a wide variety of oughts to ensure that the theorem prover's claims are trustworthy:
- The prover's axioms ought to correspond to the official PA axioms from a trustworthy source such as a textbook.
- The prover ought to only prove valid theorems; it must not have bugs.
- The programmer ought to reason about and test the code.
- The user ought to learn and understand the mathematical notation and enter well-formed propositions.
Etc., etc. Failure to adhere to these oughts undermines the justification for the theorem prover's claims (a sketch of one such check follows below).
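To make the first and last of these oughts concrete, here is a minimal sketch, assuming a hypothetical prover that exposes its axioms as strings (every name below is invented for illustration, and the reference list omits the induction schema for brevity): a check that the prover's axioms match a hand-transcribed reference, and a crude well-formedness check on user input.

```python
# Hypothetical names throughout; not the API of any real prover.

REFERENCE_PA_AXIOMS = {  # transcribed by hand from a trusted textbook
    "forall x. ~(S(x) = 0)",
    "forall x y. (S(x) = S(y)) -> (x = y)",
    "forall x. x + 0 = x",
    "forall x y. x + S(y) = S(x + y)",
    "forall x. x * 0 = 0",
    "forall x y. x * S(y) = x * y + x",
    # induction schema omitted for brevity
}

def axioms_match(prover_axioms: set) -> bool:
    # The prover's axioms ought to correspond to the official ones.
    return prover_axioms == REFERENCE_PA_AXIOMS

def looks_well_formed(prop: str) -> bool:
    # Crude placeholder for the user's obligation to enter well-formed input.
    return prop.strip() != "" and prop.count("(") == prop.count(")")
```

None of these checks are part of the prover's theorems; they are part of what the user and programmer ought to do before trusting them.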
Suppose I write a program that spits out syntactically correct, but otherwise random, statements. Is that a "theorem prover"? No, because it is not following any norms of reasoning. A theorem prover follows the norms programmed into it, or it is not a theorem prover. Of course it is not reflexively aware that it is following norms, any more than it is aware that it is making claims. And the ability of humans to follow norms is itself at least partially "programmed" from the outside.
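A toy contrast may help (Python, purely illustrative): the first function emits syntactically correct but unconstrained equations; the second only emits statements reachable from an axiom by a single sound rewrite rule. Only the second is following a norm of reasoning, however minimal.

```python
import random

def random_equation() -> str:
    # Syntactically fine, but not constrained by any norm of inference.
    return f"{random.randint(0, 9)} + {random.randint(0, 9)} = {random.randint(0, 99)}"

def derived_equations(n: int) -> list:
    # Start from the axiom "0 + 0 = 0" and repeatedly apply one sound rule:
    # from "a + b = c" infer "a + S(b) = S(c)" (written here with numerals).
    a, b, c = 0, 0, 0
    out = ["0 + 0 = 0"]
    for _ in range(n):
        b, c = b + 1, c + 1
        out.append(f"{a} + {b} = {c}")
    return out

print(random_equation())     # may well be false
print(derived_equations(3))  # every line is a PA theorem
```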