No, it’s saying that there is no possible interpretation of the system’s behavior in which it behaves like PA—not just that a particular interpretation fails to match.
Doesn’t a correct PA theorem prover behave like a bounded approximation of PA?
I’m not saying that there don’t exist things which behave like PA.
I’m saying that there exist things which cannot be interpreted as behaving like PA, under any interpretation (where “interpretation” = homomorphism). On the other hand, there are also things which do behave like PA. So, there is a rigorous sense in which some systems do embed PA, and others do not.
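To make the "interpretation = homomorphism" idea concrete, here is a toy sketch of my own (not from the thread, and with a finite system standing in for PA): a brute-force search for an injective homomorphism (an embedding) from a small system's operation table into addition mod 4. One system embeds; another admits no embedding under any interpretation.

```python
from itertools import permutations

def embeds(op, n, mod=4):
    """Search for an injective homomorphism ("embedding") from an
    n-element system with binary operation table `op` into addition
    mod `mod`. Returns the interpretation if one exists, else None."""
    for image in permutations(range(mod), n):  # injective maps only
        f = dict(enumerate(image))
        if all(f[op[a][b]] == (f[a] + f[b]) % mod
               for a in range(n) for b in range(n)):
            return f
    return None

xor = [[0, 1], [1, 0]]    # Z2 under XOR: structure-preserving behavior
const = [[0, 0], [0, 0]]  # a "dead" system: every output is 0

print(embeds(xor, 2))    # {0: 0, 1: 2} — an interpretation exists
print(embeds(const, 2))  # None — no interpretation works
```

The point is that the verdict is not a matter of how we choose to describe the system: the search ranges over *all* candidate interpretations, so "no embedding" is a fact about the system itself.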
The same concept yields a general notion of “is”, entirely independent of any notion of “ought”: we have some system which takes in a “territory”, and produces a (supposed) “map” of the territory. For some such systems, there is not any interpretation whatsoever under which the “map” produced will actually match the territory. For other systems, there is an interpretation under which the map matches the territory. So, there is a rigorous sense in which some systems produce accurate maps of territory, and others do not, entirely independent of any “ought” claims.
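The same brute-force idea carries over to map/territory. A toy sketch (again my own example, with a tiny relational structure standing in for the territory): an interpretation is a bijection from map symbols to territory objects, and we ask whether *any* interpretation makes the map match.

```python
from itertools import permutations

def accurate_under_some_interpretation(territory, map_, labels):
    """Does ANY relabeling of the map's symbols make it match the territory?
    `territory` and `map_` are sets of ordered pairs (a tiny relational
    'world'); an interpretation is a bijection from labels to objects."""
    for perm in permutations(labels):
        interp = dict(zip(labels, perm))
        if {(interp[x], interp[y]) for (x, y) in map_} == territory:
            return interp
    return None

territory = {(0, 1), (1, 2)}   # the actual structure: a two-step chain
good_map = {(2, 0), (0, 1)}    # right structure, scrambled labels
bad_map  = {(0, 1), (1, 0)}    # symmetric — no relabeling can fix it

print(accurate_under_some_interpretation(territory, good_map, [0, 1, 2]))
print(accurate_under_some_interpretation(territory, bad_map,  [0, 1, 2]))
```

The first map is accurate under some interpretation; the second is inaccurate under every interpretation, because relabeling preserves symmetry and the territory is asymmetric. Neither verdict appeals to any "ought".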
I agree that once you have a fixed abstract algorithm A and abstract algorithm B, it may or may not be the case that there exists a homomorphism from A to B justifying the claim that A implements B. Sorry for misunderstanding.
But the main point in my PA comment still stands: to have justified belief that some theorem prover implements PA, a philosophical mathematician must follow oughts.
(When you talk about naive Bayes or a theorem prover as if it has “a map”, you’re applying a teleological interpretation—treating the object as if it is supposed to correspond with some territory, be coherent, etc.—which is not simply a function of the algorithm itself.)