I know very well the difference between a collection of axioms and a collection of models of which those axioms are true, thank you.
A lot of people seem to have trouble imagining what it means to consider the hypothesis that SS0+SS0 = SSS0 is true in all models of arithmetic, for purposes of deriving predictions which distinguish it from what we should see given the alternative hypothesis that SS0+SS0=SSSS0 is true in all models of arithmetic, thereby allowing internal or external experience to advise you on which of these alternative hypotheses is true.
I know very well the difference between a collection of axioms and a collection of models of which those axioms are true, thank you.
Then why do you persist in saying things like “I don’t believe in [Axiom X]/[Mathematical Object Y]”? If this distinction that you are so aptly able to rehearse were truly integrated into your understanding, it wouldn’t occur to you to discuss whether you have “seen” a particular cardinal number.
I understand the point you wanted to make in this post, and it’s a valid one. All the same, it’s extremely easy to slip from empiricism to Platonism when discussing mathematics, and parts of this post can indeed be read as betraying that slip (to which you have explicitly fallen victim on other occasions, the most recent being the thread I linked to).
I don’t think people really understood what I was talking about in that thread. I would have to write a sequence about
the difference between first-order and second-order logic
why the Löwenheim-Skolem theorems show that you can talk about integers or reals in higher-order logic but not first-order logic
why third-order logic isn’t qualitatively different from second-order logic in the same way that second-order logic is qualitatively above first-order logic
the generalization of Solomonoff induction to anthropic reasoning about agents resembling yourself who appear embedded in models of second-order theories, with more compact axiom sets being more probable a priori
how that addresses some points Wei Dai has made about hypercomputation not being conceivable to agents using Solomonoff induction on computable Cartesian environments, as well as formalizing some of the questions we argue about in anthropic theory
why seeing apparently infinite time and apparently continuous space suggests, to an agent using second-order anthropic induction, that we might be living within a model of axioms that imply infinity and continuity
why believing that things like a first uncountable ordinal can contain reality-fluid in the same way as the wavefunction, or even be uniquely specified by second-order axioms that pin down a single model up to isomorphism the way that second-order axioms can pin down integerness and realness, is something we have rather less evidence for, on the surface of things, than we have evidence favoring the physical existability of models of infinity and continuity, or the mathematical sensibility of talking about the integers or real numbers.
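A compressed gloss of the first-order/second-order contrast in the bullets above (my summary, not part of the planned sequence): first-order PA can only state induction as a schema, one instance per formula, while second-order PA quantifies over all properties at once, which is what lets it pin down the natural numbers up to isomorphism (Dedekind's categoricity theorem).

```latex
% First-order induction: an axiom schema, one instance per formula \varphi
\varphi(0) \land \forall n\,(\varphi(n) \to \varphi(Sn)) \to \forall n\,\varphi(n)

% Second-order induction: a single axiom quantifying over all properties P
\forall P\,\bigl(P(0) \land \forall n\,(P(n) \to P(Sn)) \to \forall n\,P(n)\bigr)
```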
Everything sounded perfectly good until the last bullet:
why believing that things like a first uncountable ordinal can contain reality-fluid in the same way as the wavefunction
ERROR: CATEGORY. “Wavefunction” is not a mathematical term, it is a physical term. It’s a name you give to a mathematical object when it is being used to model the physical world in a particular way, in the specific context of that modeling-task. The actual mathematical object being used as the wavefunction has a mathematical existence totally apart from its physical application, and that mathematical existence is of the exact same nature as that of the first uncountable ordinal; the (mathematical) wavefunction does not gain any “ontological bonus points” for its role in physics.
or even be uniquely specified by second-order axioms that pin down a single model up to isomorphism the way that second-order axioms can pin down integerness and realness
Pinning down a single model up to isomorphism might be a nice property for a set of axioms to have, but it is not “reality-conferring”: there are two groups of order 4 up to isomorphism, while there is only one of order 3; yet that does not make “group of order 3” a “more real” mathematical object than “group of order 4”.
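The group-counting claim is easy to verify by brute force. The sketch below is my own illustration, not anything from the thread: it enumerates Cayley tables on {0..n-1} with 0 fixed as the identity, and classifies them by the multiset of element orders, which suffices as an isomorphism invariant at these tiny sizes.

```python
from itertools import permutations

def group_tables(n):
    """Yield all group Cayley tables on {0..n-1} with identity 0."""
    # Row 0 must be the identity row (0*x = x); row i must start with i (x*0 = x).
    row_choices = [
        [(i,) + p for p in permutations(set(range(n)) - {i})]
        for i in range(1, n)
    ]
    def rec(rows):
        if len(rows) == n:
            # Latin-square check on the columns, then associativity.
            if all(len({r[c] for r in rows}) == n for c in range(n)) and \
               all(rows[rows[a][b]][c] == rows[a][rows[b][c]]
                   for a in range(n) for b in range(n) for c in range(n)):
                yield rows
            return
        for r in row_choices[len(rows) - 1]:
            yield from rec(rows + [list(r)])
    yield from rec([list(range(n))])

def orders(table):
    """Sorted multiset of element orders: an isomorphism invariant."""
    n = len(table)
    out = []
    for x in range(n):
        k, y = 1, x
        while y != 0:          # multiply by x until we return to the identity
            y = table[y][x]
            k += 1
        out.append(k)
    return tuple(sorted(out))

print(len({orders(t) for t in group_tables(4)}))  # -> 2  (Z4 and the Klein group)
print(len({orders(t) for t in group_tables(3)}))  # -> 1  (only Z3)
```

The order-multiset trick works here because the two groups of order 4 differ in whether any element has order 4; for larger orders a proper isomorphism check would be needed.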
But that does not imply that you can’t talk about integers or reals in first-order logic. Indeed, you can talk about integers and real numbers using first-order logic; people do so all the time.
Only in the same sense that you can talk about kittens by saying “Those furry things!” There’ll always be some ambiguity over whether you’re talking about kittens or lions, even though kittens are in fact furry and have all the properties that you can deduce to hold true of furry things.
Yes, and that’s OK. I suspect you can’t do qualitatively better than that (viz ambient set-theoretic universe for second-order logic), but it’s still possible (necessary?) to work under this apparent lack of absolute control over what it is you are dealing with. Even though (first order) PA doesn’t know what “integers” are, it’s still true that the statements it believes valid are true for “integers”, it’s useful that way (just as AIs or humans are useful for making the world better). It is a device that perceives some of the properties of the object we study, but not all, not enough to rebuild it completely. (Other devices can form similarly imperfect pictures of the object of study and its relationship with the device perceiving it, or of themselves perceiving this process, or of the object of study being affected by behavior of some of these devices.)
Likewise, we may fail to account for all worlds that we might be affecting by our decisions, but we mostly care about (or maybe rather have non-negligible consequentialist control over) “real world” (or worlds), whatever this is, and it’s true that our conclusions capture some truth about this “real world”, even if it’s genuinely impossible for us to ever know completely what it is. (We of course “know” plenty more than was ever understood, and it’s a big question how to communicate to a FAI what we do know.)
Not in the same sense at all. All of the numbers that you have ever physically encountered were nameable, definable, computable. Moreover they came to you with algorithms for verifying that one of them was equal to another.
I don’t believe it’s good math until it becomes possible to talk about the first uncountable ordinal, in the way that you can talk about the integers. Any first-order theory of the integers, like first-order PA, will have some models containing supernatural numbers, but there are many different sorts of models of supernatural numbers, you couldn’t talk about the supernaturals the way you can talk about 3 or the natural numbers. My skepticism about “the first uncountable ordinal” is that there would not exist any canonicalizable mathematical object—nothing you could ever pin down uniquely—that would ever contain the first uncountable ordinal inside it, because of the indefinitely extensible character of well-ordering. This is a sort of skepticism of Platonic existence—when that which you thought you wanted to talk about can never be pinned down even in second-order logic, nor in any other language which does not permit of paradox.
You seem to keep forgetting that the whole notion of “second-order logic” does not make sense without some ambient set theory. (Unless I am greatly misunderstanding how second-order logic works?) And if you have that, then you can pin down the natural numbers (and the first uncountable ordinal) in first-order terms in this larger theory.
Only to the same degree that first-order logic requires an ambient group of models (not necessarily sets) to make sense. It’s just that the ambient models in the second-order theory include collections of possible predicates of any objects that get predicates attached, or if you prefer, people who speak in second-order logic think that it makes as much sense to say “all possible collections that include some objects and exclude others, but still include and exclude only individual objects” as “all objects”.
Only to the same degree that first-order logic requires an ambient group of models (not necessarily sets) to make sense.
Well, it makes sense to me without any models. I can compute, prove theorems, verify proofs of theorems and so on happily without ever producing a “model” for the natural numbers in toto, whatever that could mean.
Okay… I now know what an ordinal number actually is. And I’m trying to make more sense out of your comment...
So, re-reading this:
or even be uniquely specified by second-order axioms that pin down a single model up to isomorphism the way that second-order axioms can pin down integerness and realness, is something we have rather less evidence for
So if I understand you correctly, you don’t trust anything that can’t be defined up to isomorphism in second-order logic, and “the set of all countable ordinals” is one of those things?
(I never learned second order logic in college...)
Hmm, funny you should treat “I don’t believe in [Mathematical Object Y]” as Platonism. I generally characterise my ‘syntacticism’ (which I intend to explain more fully when I understand it the hell myself) as a “Platonic Formalism”; it is promiscuously inclusive of Mathematical Objects. If you can formulate a set of behaviours (inference rules) for it, then it has an existing Form—and that Form is the formalism (or… syntax) that encapsulates its behaviour. So in a sense, uncountable cardinals don’t exist—but the theory of uncountable cardinals does exist; similarly, the theory of finite cardinals exists but the number ‘2’ doesn’t.
This is of course bass-ackwards from a map-territory perspective; I am claiming that the map exists and the territory is just something we naïvely suppose ought to exist. After all, a map of non-existent territory is observationally equivalent to a map of manifest reality; unless you can observe the actual territory you can’t distinguish the two. Taking as assumption that the observe() function always returns an object Map, the idea that there is a territory gets Occamed out.
There is a good reason why I should want to do something so ontologically bizarre: by removing referents, and semantics, and manifest reality; by retaining only syntax, and rejecting the suggestion that one logic really “models” another, we finally solve the problems of Gödel (I’m a mathematician, not a philosopher, so I’m allowed to invoke Gödel without losing automatically) and the infinite descent when we say “first order logic is consistent because second-order logic proves it so, and we can believe second-order logic because third-order logic proves it consistent, and...”. When all you are doing is playing symbol games stripped of any semantics, “P ∧ ¬P” is just a string, and who cares if you can derive it from your axiomata? It only stops being a string when you apply your symbol games to what you unknowingly label as “manifest reality”, when you (essentially) claim that symbol game A (Peano arithmetic) models symbol game B (that part of the Physics game that deals with the objects you’ve identified as pebbles).
Platonism is not a means of excluding a mathematical object because it’s not one of the Forms; it is a means of allowing any mathematical object to have a Form whether you like it or not. I don’t believe in God, but there still exists a Form for a mathematical object that looks a lot like “a universe in which God exists”. It’s just that conceiving of a possible world only makes an arrow from your world to its, not an arrow in the reverse direction, hence why “A perfect God would have the quality of existence” is such a laughable non-starter :) What if someone broke out of a hypothetical situation in your room right now?
(I’m a mathematician, not a philosopher, so I’m allowed to invoke Gödel without losing automatically)
As a fellow mathematician, I want to point out that it doesn’t mean you win automatically, either. Just look at Voevodsky’s recent FOM talk at the IAS.
Well, of course I don’t win automatically. It’s just that there’s a kind of Godwin’s Law of philosophy, whereby the first to invoke Gödel loses by default.
I, at least, was not suggesting that you don’t know the difference, merely that your article failed to take account of the difference and was therefore confusing and initially unconvincing to me because I was taking account of that difference.
However (and it took me too damn long to realise this; I can’t wait for Logic and Set Theory this coming year), I wasn’t talking about “models” in the sense that pebbles are a Model of the Theory PA. I was talking in the sense that PA is a model of the behaviour observed in pebbles. If PA fails to model pebbles, that doesn’t mean PA is wrong, it just means that pebbles don’t follow PA. If a Model of PA exists in which SS0+SS0 = SSS0, then the Theory PA materially cannot prove that SS0+SS0 ≠ SSS0, and if such a proof has been constructed from the axiomata of the Theory then either the proof is in error (exists a step not justified by the inference rules), or the combination of axiomata and inference rules contains a contradiction (which can be rephrased as “under these inference rules, the Theory is not consistent”), or the claimed Model is not in fact a Model at all (in which case one of the axiomata does not, in fact, apply to it).
I should probably write down what I think I know about the epistemic status of mathematics and why I think I know it, because I’m pretty sure I disagree quite strongly with you (and my prior probability of me being right and you being wrong is rather low).
Reading your essay I wondered whether it would have been more effective if you had chosen bigger numbers than 2, 2, and 3. e.g. “How to convince me that 67+41 = 112.”
Maybe I’m misinterpreting you, but could you explain how any non-symmetric equation can possibly be true in all models of arithmetic?
The purpose of the article is only to describe some subjective experiences that would cause you to conclude that SS0+SS0 = SSS0 is true in all models of arithmetic. But Eliezer can only describe certain properties that those subjective experiences would have. He can’t make you have the experiences themselves.
So, for example, he could say that one such experience would conform to the following description: “You count up all the S’s on one side of the equation, and you count up all the S’s on the other side of the equation, and you find yourself getting the same answer again and again. You show the equation to other people, and they get the same answer again and again. You build a computer from scratch to count the S’s on both sides, and it says that there are the same number again and again.”
Such a description gives some features of an experience. The description provides a test that you could apply to any given experience and answer the question “Does this experience satisfy this description or not?” But the description is not like one in a novel, which, ideally, would induce you to have the experience, at least in your imagination. That is a separate and additional task beyond what this post set out to accomplish.
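The counting procedure in that description is mechanical enough to sketch in a few lines. This is my own toy rendering of it, not code from the post: it counts the S's on each side of an equation written in successor notation.

```python
def count_s(side: str) -> int:
    """Count successor applications on one side of an equation.

    A numeral is a string of S's followed by 0 (e.g. "SS0" denotes two),
    and a side may be a sum of numerals, e.g. "SS0+SS0".
    """
    return sum(part.count("S") for part in side.split("+"))

left, right = "SS0+SS0", "SSSS0"
print(count_s(left), count_s(right))        # -> 4 4
print(count_s(left) == count_s("SSS0"))     # -> False (the rival hypothesis)
```

The point of the thought experiment is that you could, in principle, get a different answer every time you ran such a count; the code only describes the experience of the counts agreeing, it cannot force them to.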
Yes, I am aware of that. However, I don’t think two pebbles on the table plus another two pebbles on the table resulting in three pebbles on the table could cause anyone sane to conclude that SS0 + SS0 = SSS0 is true in all models of arithmetic. In order to be convinced of that, you would have to assign “PA doesn’t apply to pebbles” a lower prior probability than “PA is wrong”.
The statement “PA applies to pebbles” (or to anything else, for that matter) doesn’t follow from the Peano axioms in any way and is therefore not part of Peano arithmetic. So what if Peano arithmetic doesn’t apply to pebbles? There are other arithmetics that don’t apply either, and that doesn’t make them wrong. You use them every day in situations where they do apply.
A mathematical theory doesn’t consist of beliefs that are based on evidence; it’s an axiomatic system. There is no way any real-life situation could convince me that PA is false. Saying “SS0 + SS0 = SSS0 is true in all models of arithmetic” sounds like “0 = S0” or “garble asdf qwerty sputz” to me. It just doesn’t make any sense.
Mathematics has nothing to do with experience, only to what extent mathematics applies to reality does.
That you have certain mathematical beliefs has a lot to do with the experiences that you have had. This applies in particular to your beliefs about what the theorems of PA are.
Sorry, I edited the statement in question right before you posted that, because I anticipated a similar reaction. However, you’re still wrong. My experiences bear only on my beliefs about the extent to which Peano arithmetic applies to reality, which is something completely different.
Edit: OK, you’re probably not wrong; rather, it seems we are talking about different things when we say “mathematical beliefs”. Whether Peano arithmetic applies to reality is not a mathematical belief for me.
Consider the experiences that you have had while reading and thinking about proofs within PA. (The experience of devising and confirming a proof is just a particular kind of experience, after all.) Are you saying that the contents of those experiences have had nothing to do with the beliefs that you have formed about what the theorems of PA are?
Suppose that those experiences had been systematically different in a certain way. Say that you consistently made a certain kind of mistake while confirming PA proofs, so that certain proofs seemed to be valid to you that don’t seem valid to you in reality. Would you not have arrived at different beliefs about what the theorems of PA are?
That is the sense in which your beliefs about what the theorems of PA are depend on your experiences.
I’m not sure I 100% understand what you’re saying, but the question “which beliefs will I end up with if logical reasoning itself is flawed” is of little interest to me.
Yes, because if I assume that my faculty of logical reasoning is flawed, no deduction I make by logical reasoning can be considered certain, in which case everything falls: mathematics, physics, Bayesianism, you name it. It is therefore (haha! but what if my faculty of logical reasoning is flawed?) very irrational to assume this.
But you know that your faculty of logical reasoning is flawed to some extent. Humans are not perfect logicians. We manage to find use in making long chains of logical deductions even though we know that they contain mistakes with some nonzero probability.
I don’t know that. Can you prove that under the assumption you’re making?
As I see it, my faculty of logical reasoning is not flawed in any way. The only thing that’s flawed is my faculty of doing logical reasoning, i.e. I’m not always doing logical reasoning when I should be. But that’s hardly the matter here.
I would be very interested in how you can come to any conclusion under the assumption that the logical reasoning you do to come to that conclusion is flawed. If my faculty of logical reasoning is flawed, I can only say one thing with certainty, which is that my faculty of logical reasoning is flawed. Actually, I don’t think I could even say that.
Edit:
We manage to find use in making long chains of logical deductions even though we know that they contain mistakes with some nonzero probability.
I don’t consider this to be a problem of actual faculty of logical reasoning because if someone finds a logical mistake I will agree with them.
So you don’t consider mistakes in logical reasoning a problem because someone might point them out to you? What if it’s an easy mistake to make, and a lot of other people make the same mistake? At this point, it seems like you’re arguing about the definition of the words “problem with”, not about states of the world. Can you clarify what disagreement you have about states of the world?
I think the point is that mathematical reasoning is inherently self-correcting in this sense, and that this corrective force is intentionistic and Lamarckian—it is being corrected toward a mathematical argument which one thinks of as a timeless perfect Form (because come on, are there really any mathematicians who don’t, secretly, believe in the Platonic realism of mathematics?), and not just away from an argument that’s flawed.
An incorrect theory can appear to be supported by experimental results (with probability going to 0 as the sample size goes to infinity), and if you have the finite set of experimental results pointing to the wrong conclusion, then no amount of mind-internal examination of those results can correct the error (if it could, your theory would not be predictive; conservation of probability, you all know that). But mind-internal examination of a mathematical argument, without any further entangling (so no new information, in the Bayesian sense, about the outside world; only new information about the world inside your head), can discover the error, and once it has done so, it is typically a mechanical process to verify that the error is indeed an error and that the correction has indeed corrected that error.
This remains true if the error is an error of omission (We haven’t found the proof that T, so we don’t know that T, but in fact there is a proof of T).
So you’re not getting new bits from observed reality, yet you’re making new discoveries and overthrowing past mistakes. The bits are coming from the processing; your ignorance has decreased by computation without the acquisition of bits by entangling with the world. That’s why deductive knowledge is categorically different, and why errors in logical reasoning are not a problem with the idea of logical reasoning itself, nor do they exclude a mathematical statement from being unconditionally true. They just exclude the possibility of unconditional knowledge.
Can you conceive of a world in which, say, ⋀∅ is false? It’s certainly a lot harder than conceiving of a world in which earplugs obey “2+2=3”-arithmetic, but is your belief that ⋀∅ unconditional? What is the absolutely most fundamentally obvious tautology you can think of, and is your belief in it unconditional? If not, what kind of evidence could there be against it? It seems to me that ¬⋀∅ would require “there exists a false proposition which is an element of the empty set”; in order to make an error there I’d have to have made an error in looking up a definition, in which case I’m not really talking about ⋀∅ when I assert its truth; nonetheless the thing I am talking about is a tautological truth and so one still exists (I may have gained or lost a ‘box’, here, in which case things don’t work).
My mind is beginning to melt and I think I’ve drifted off topic a little. I should go to bed. (Sorry for rambling)
I guess there are my beliefs-which-predict-my-expectations and my aliefs-which-still-weird-me-out. In the sense of beliefs which predict my expectation, I would say the following about mathematics: as far as logic is concerned, I have seen (with my eyes, connected to neurons, and so on) the proof that from P&-P anything follows, and since I do want to distinguish “truth” from “falsehood”, I view it (unless I made a mistake in the proof of P&-P->Q, which I view as highly unlikely—an easy million-to-one against) as false. Anything which leads me to P&-P, therefore, I see as false, conditional on the possibility I made a mistake in the proof (or not noticed a mistake someone else made). Since I have a proof from “2+2=3” to “2+2=3 and 2+2!=3” (which is fairly simple, and I checked multiple times) I view 2+2=3 as equally unlikely. That’s surely entanglement with the world—I manipulated symbols written by a physical pen on a physical paper, and at each stage, the line following obeyed a relationship with the line before it. My belief that “there is some truth”, I guess, can be called unconditional—nothing I see will convince me otherwise. But I’m not even certain I can conceive of a world without truth, while I can conceive of a world, sadly, where there are mistakes in my proofs :)
You’re missing the essential point about deductives, which is this:
Changing the substrate used for the calculations does not change the experiment.
With a normal experiment, if you repeat my experiment it’s possible that your apparatus differs from mine in a way which (unbeknownst to either of us) is salient and affects the outcome.
With mathematical deduction, if our results disagree, (at least) one of us is simply wrong, it’s not “this datum is also valid but it’s data about a different set of conditions”, it’s “this datum contains an error in its derivation”. It is the same experiment, and the same computation, whether it is carried out on my brain, your brain, your brain using pen and paper as an external single-write store, theorem-prover software running on a Pentium, the same software running on an Athlon, different software in a different language running on a Babbage Analytical Engine… it’s still the same experiment. And a mistake in your proof really is a mistake, rather than the laws of mathematics having been momentarily false leading you to a false conclusion.
To quote the article, “Unconditional facts are not the same as unconditional beliefs.” Contrapositive: conditional beliefs are not the same as conditional facts.
The only way in which your calculation entangled with the world is in terms of the reliability of pen-and-paper single-write storage; that reliability is not contingent on what the true laws of mathematics are, so the bits that come from that are not bits you can usefully entangle with. The bits that you can obtain about the true laws of mathematics are bits produced by computation.
I don’t consider these mistakes to be no problem at all. What I meant to say is that the existence of these noise errors doesn’t reduce the reasonableness of my going around and using logical reasoning to draw deductions. Which also means that if reality seems to contradict my deductions, then either there is an error within my deductions that I can theoretically find, or there is an error within the line of thought that made me doubt my deductions, for example eyes being inadequate tools for counting pebbles. To put it more generally: if I don’t find errors within my deductions, then my perception of reality is not an appropriate measure for the truth of my deductions, unless said deductions deal in any way with the applicability of other deductions to reality, or with reality in general, which mathematics does not.
It’s not as if errors in perceiving reality weren’t much more numerous and harder to detect than errors in anyone’s faculty of doing logical reasoning.
It’s not as if errors in perceiving reality weren’t much more numerous and harder to detect than errors in anyone’s faculty of doing logical reasoning.
And the probability of an error in a given logical argument gets smaller as the chain of deductions gets shorter and as the number of verifications of the argument gets larger.
Nonetheless, the probability of error should never reach zero, even if the argument is as short as the proof that SS0 + SS0 = SSSS0 in PA, and even if the proof has been verified by yourself and others billions of times.
ETA: Wherever I wrote “proof” in this comment, I meant “alleged proof”. (Erm … except for in this ETA.)
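The shrinking-but-never-zero point can be put in a toy Bayesian sketch. The numbers below are invented purely for illustration (my assumptions, not anything stated in the thread): a prior chance that the alleged proof is flawed, and a chance that any one verification overlooks a real flaw.

```python
prior_error = 0.01   # assumed prior probability the alleged proof is flawed
miss_rate   = 0.05   # assumed chance a verification overlooks a real flaw

p = prior_error
for checks in range(1, 6):
    # P(flaw | passed another independent check), assuming a correct
    # proof is never rejected:
    p = p * miss_rate / (p * miss_rate + (1 - p))
    print(checks, p)

# p shrinks rapidly with each verification, but is never exactly 0
```

The qualitative conclusion does not depend on the particular numbers: as long as the miss rate is nonzero, no finite number of verifications drives the probability of error to zero.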
The probability that there is an error in the line of thought that leads me to conclude that there is an error within any theorem of Peano arithmetic is always higher than the probability that there actually is an error within any theorem of Peano arithmetic, since probability theory is itself built on Peano arithmetic: if SS0 + SS0 = SSSS0 were wrong, probability theory would be at least equally wrong.
the probability that there actually is an error within any theorem of Peano arithmetic.
(Emphasis added.) Wherever I wrote “proof” in the grandparent comment, I should have written “alleged proof”.
We probably agree that the idea of “an error in a theorem of PA” isn’t meaningful. But the idea that everyone was making a mistake the whole time that they thought that SS0 + SS0 = SSSS0 was a theorem of PA, while, all along, SS0 + SS0 = SSS0 was a theorem of PA — that idea is meaningful. After all, people are all the time alleging that some statement is a theorem of PA when it really isn’t. That is to say, people make arithmetic mistakes all the time.
That is true. However, if your perception of reality leads you to think that there might be an error with SS0 + SS0 = SSSS0, and you can’t find that error, then it is irrational to assume that the error actually lies with SS0 + SS0 = SSSS0 rather than with your perception of reality or with the concept of applying SS0 + SS0 = SSSS0 to reality.
I think so, if I understand you. But I think that you’re referring to a more restricted class of “perceptions of reality” than Eliezer is.
In the kind of scenario that Eliezer is talking about, your perceptions of reality include seeming to find an error in the alleged proof that SS0 + SS0 = SSSS0 (and confirming your perception of an error sufficiently many times to outweigh all the times when you thought you’d confirmed that the alleged proof was valid). If that is the kind of “perception of reality” that we’re talking about, then you should conclude that there was an error in the alleged proof of SS0 + SS0 = SSSS0.
That is all good and valid, and of course I don’t believe in any results of deductions with errors in them just based on said deductions. But that has nothing to do with reality. Two pebbles plus two pebbles resulting in three pebbles is not what convinces me that SS0 + SS0 = SSS0; finding the error is, which is nothing that is perceived (i.e. it is purely abstract).
If we’re defining “situation” in a way similar to how it’s used in the top-level post (pebbles and stuff), then there simply can’t exist a situation that could convince me that SS0 + SS0 = SSSS0 is wrong in Peano arithmetic. It might convince me to check Peano arithmetic, of course, but that’s all.
I try not to argue about definitions of words, but it seems to me that as soon as you define words like “perception”, “situation”, “believe” et cetera in a way that would result in a situation capable of convincing me that SS0 + SS0 = SSS0 is true in Peano arithmetic, we are no longer talking about reality.
Okay, I just thought of a possible situation that would indeed “convince” me of 2 + 2 = 3: Disable the module of my brain responsible for logical reasoning, then show me some stage magic involving pebbles or earplugs, and then my poor rationalization module would probably end up with some explanation along the lines of 2 + 2 = 3.
As I see it, my faculty of logical reasoning is not flawed in any way. The only thing that’s flawed is my faculty of doing logical reasoning, i.e. I’m not always doing logical reasoning when I should be.
Sorry for not being clear. By “faculty of logical reasoning”, I mean nothing other than “faculty of doing logical reasoning”.
And another thing: it might be possible that if Peano arithmetic didn’t apply to reality I wouldn’t have any beliefs about Peano arithmetic, because I might never think of it. However, there is no way I could establish the Peano axioms and then believe that SS0 + SS0 = SSS0 is true within Peano arithmetic. It’s just not possible.
SS0 isn’t a free variable like “x”, it is, in any given model of arithmetic, the unique object related by the successor relation to the unique object related by the successor relation to the unique object which is not related by the successor relation to any object, which is how mathematicians say “Two”.
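To make the successor notation concrete, here is a toy evaluator (my own sketch; the function name is invented) that adds unary numerals using nothing but PA's recursion equations for addition, a + 0 = a and a + S(b) = S(a + b):

```python
def add(a: str, b: str) -> str:
    """Add two unary numerals (e.g. "SS0" denotes two) using only the
    PA recursion equations for +: a + 0 = a, and a + S(b) = S(a + b)."""
    if b == "0":                 # a + 0 = a
        return a
    # b has the form S(b'), so a + S(b') = S(a + b')
    return "S" + add(a, b[1:])

print(add("SS0", "SS0"))         # -> SSSS0, never SSS0
```

Each recursive step is one application of the second recursion equation, so the computation is itself a proof sketch of SS0 + SS0 = SSSS0 from the axioms.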
I am quite familiar with TNT. However, either you are talking about models of arithmetic based on the Peano axioms, in which case e.g. SS0 + SS0 = SSS0 simply cannot be true, for it contradicts those axioms, and if both the Peano axioms and said equation were true you wouldn’t have a model of arithmetic; or (what I’m assuming) you are actually talking about non-Peano arithmetics, in which case there is no compelling reason why any equation of this kind should generally be true anyway.
On another note, it seems that Bayesianism is heavily based on Peano arithmetic, so refuting Peano arithmetic by means of Bayesianism seems like refuting Bayesianism rather than refuting Peano arithmetic, at least to me.
Exactly. This is one of Eliezer’s few genuine philosophical mistakes, one which, four years later, he’s still making.
I know very well the difference between a collection of axioms and a collection of models of which those axioms are true, thank you.
A lot of people seem to have trouble imagining what it means to consider the hypothesis that SS0+SS0 = SSS0 is true in all models of arithmetic, for purposes of deriving predictions which distinguish it from what we should see given the alternative hypothesis that SS0+SS0=SSSS0 is true in all models of arithmetic, thereby allowing internal or external experience to advise you on which of these alternative hypotheses is true.
Then why do you persist in saying things like “I don’t believe in [Axiom X]/[Mathematical Object Y]”? If this distinction that you are so aptly able to rehearse were truly integrated into your understanding, it wouldn’t occur to you to discuss whether you have “seen” a particular cardinal number.
I understand the point you wanted to make in this post, and it’s a valid one. All the same, it’s extremely easy to slip from empiricism to Platonism when discussing mathematics, and parts of this post can indeed be read as betraying that slip (to which you have explicitly fallen victim on other occasions, the most recent being the thread I linked to).
I don’t think people really understood what I was talking about in that thread. I would have to write a sequence about
the difference between first-order and second-order logic
why the Löwenheim–Skolem theorems show that you can talk about integers or reals in higher-order logic but not first-order logic
why third-order logic isn’t qualitatively different from second-order logic in the same way that second-order logic is qualitatively above first-order logic
the generalization of Solomonoff induction to anthropic reasoning about agents resembling yourself who appear embedded in models of second-order theories, with more compact axiom sets being more probable a priori
how that addresses some points Wei Dai has made about hypercomputation not being conceivable to agents using Solomonoff induction on computable Cartesian environments, as well as formalizing some of the questions we argue about in anthropic theory
why seeing apparently infinite time and apparently continuous space suggests, to an agent using second-order anthropic induction, that we might be living within a model of axioms that imply infinity and continuity
why believing that things like a first uncountable ordinal can contain reality-fluid in the same way as the wavefunction, or even be uniquely specified by second-order axioms that pin down a single model up to isomorphism the way that second-order axioms can pin down integerness and realness, is something we have rather less evidence for, on the surface of things, than we have evidence favoring the physical existability of models of infinity and continuity, or the mathematical sensibility of talking about the integers or real numbers.
I would like very very much to read that sequence. Might it be written at some point?
Everything sounded perfectly good until the last bullet:
ERROR: CATEGORY. “Wavefunction” is not a mathematical term, it is a physical term. It’s a name you give to a mathematical object when it is being used to model the physical world in a particular way, in the specific context of that modeling-task. The actual mathematical object being used as the wavefunction has a mathematical existence totally apart from its physical application, and that mathematical existence is of the exact same nature as that of the first uncountable ordinal; the (mathematical) wavefunction does not gain any “ontological bonus points” for its role in physics.
Pinning down a single model up to isomorphism might be a nice property for a set of axioms to have, but it is not “reality-conferring”: there are two groups of order 4 up to isomorphism, while there is only one of order 3; yet that does not make “group of order 3” a “more real” mathematical object than “group of order 4”.
Löwenheim–Skolem, maybe?
But that does not imply that you can’t talk about integers or reals in first-order logic. And indeed you can talk about integers and real numbers using first-order logic; people do so all the time.
Only in the same sense that you can talk about kittens by saying “Those furry things!” There’ll always be some ambiguity over whether you’re talking about kittens or lions, even though kittens are in fact furry and have all the properties that you can deduce to hold true of furry things.
Yes, and that’s OK. I suspect you can’t do qualitatively better than that (viz ambient set-theoretic universe for second-order logic), but it’s still possible (necessary?) to work under this apparent lack of absolute control over what it is you are dealing with. Even though (first order) PA doesn’t know what “integers” are, it’s still true that the statements it believes valid are true for “integers”, it’s useful that way (just as AIs or humans are useful for making the world better). It is a device that perceives some of the properties of the object we study, but not all, not enough to rebuild it completely. (Other devices can form similarly imperfect pictures of the object of study and its relationship with the device perceiving it, or of themselves perceiving this process, or of the object of study being affected by behavior of some of these devices.)
Likewise, we may fail to account for all worlds that we might be affecting by our decisions, but we mostly care about (or maybe rather have non-negligible consequentialist control over) “real world” (or worlds), whatever this is, and it’s true that our conclusions capture some truth about this “real world”, even if it’s genuinely impossible for us to ever know completely what it is. (We of course “know” plenty more than was ever understood, and it’s a big question how to communicate to a FAI what we do know.)
Not in the same sense at all. All of the numbers that you have ever physically encountered were nameable, definable, computable. Moreover they came to you with algorithms for verifying that one of them was equal to another.
In other words, a first uncountable ordinal may be perfectly good math, but it’s not physics?
I don’t believe it’s good math until it becomes possible to talk about the first uncountable ordinal, in the way that you can talk about the integers. Any first-order theory of the integers, like first-order PA, will have some models containing supernatural numbers, but there are many different sorts of models of supernatural numbers, so you couldn’t talk about the supernaturals the way you can talk about 3 or the natural numbers. My skepticism about “the first uncountable ordinal” is that there would not exist any canonicalizable mathematical object—nothing you could ever pin down uniquely—that would ever contain the first uncountable ordinal inside it, because of the indefinitely extensible character of well-ordering. This is a sort of skepticism of Platonic existence—when that which you thought you wanted to talk about can never be pinned down even in second-order logic, nor in any other language which does not permit of paradox.
You seem to keep forgetting that the whole notion of “second-order logic” does not make sense without some ambient set theory. (Unless I am greatly misunderstanding how second-order logic works?) And if you have that, then you can pin down the natural numbers (and the first uncountable ordinal) in first-order terms in this larger theory.
Only to the same degree that first-order logic requires an ambient group of models (not necessarily sets) to make sense. It’s just that the ambient models in the second-order theory include collections of possible predicates of any objects that get predicates attached, or if you prefer, people who speak in second-order logic think that it makes as much sense to say “all possible collections that include some objects and exclude others, but still include and exclude only individual objects” as “all objects”.
Well, it makes sense to me without any models. I can compute, prove theorems, verify proofs of theorems and so on happily without ever producing a “model” for the natural numbers in toto, whatever that could mean.
Hmmm…
::goes and learns some more math from Wikipedia::
Okay… I now know what an ordinal number actually is. And I’m trying to make more sense out of your comment...
So, re-reading this:
So if I understand you correctly, you don’t trust anything that can’t be defined up to isomorphism in second-order logic, and “the set of all countable ordinals” is one of those things?
(I never learned second order logic in college...)
Hmm, funny you should treat “I don’t believe in [Mathematical Object Y]” as Platonism. I generally characterise my ‘syntacticism’ (wh. I intend to explain more fully when I understand it the hell myself) as a “Platonic Formalism”; it is promiscuously inclusive of Mathematical Objects. If you can formulate a set of behaviours (inference rules) for it, then it has an existing Form—and that Form is the formalism (or… syntax) that encapsulates its behaviour. So in a sense, uncountable cardinals don’t exist—but the theory of uncountable cardinals does exist; similarly, the theory of finite cardinals exists but the number ‘2’ doesn’t.
This is of course bass-ackwards from a map-territory perspective; I am claiming that the map exists and the territory is just something we naïvely suppose ought to exist. After all, a map of non-existent territory is observationally equivalent to a map of manifest reality; unless you can observe the actual territory you can’t distinguish the two. Taking as assumption that the observe() function always returns an object Map, the idea that there is a territory gets Occamed out.
There is a good reason why I should want to do something so ontologically bizarre: by removing referents, and semantics, and manifest reality; by retaining only syntax, and rejecting the suggestion that one logic really “models” another, we finally solve the problems of Gödel (I’m a mathematician, not a philosopher, so I’m allowed to invoke Gödel without losing automatically) and the infinite descent when we say “first order logic is consistent because second-order logic proves it so, and we can believe second-order logic because third-order logic proves it consistent, and...”. When all you are doing is playing symbol games stripped of any semantics, “P ∧ ¬P” is just a string, and who cares if you can derive it from your axiomata? It only stops being a string when you apply your symbol games to what you unknowingly label as “manifest reality”, when you (essentially) claim that symbol game A (Peano arithmetic) models symbol game B (that part of the Physics game that deals with the objects you’ve identified as pebbles).
Platonism is not a means of excluding a mathematical object because it’s not one of the Forms; it is a means of allowing any mathematical object to have a Form whether you like it or not. I don’t believe in God, but there still exists a Form for a mathematical object that looks a lot like “a universe in which God exists”. It’s just that conceiving of a possible world only makes an arrow from your world to its, not an arrow in the reverse direction, hence why “A perfect God would have the quality of existence” is such a laughable non-starter :) What if someone broke out of a hypothetical situation in your room right now?
As a fellow mathematician, I want to point out that it doesn’t mean you win automatically, either. Just look at Voevodsky’s recent FOM talk at the IAS.
Well, of course I don’t win automatically. It’s just that there’s a kind of Godwin’s Law of philosophy, whereby the first to invoke Gödel loses by default.
I, at least, was not suggesting that you don’t know the difference, merely that your article failed to take account of the difference and was therefore confusing and initially unconvincing to me because I was taking account of that difference.
However (and it took me too damn long to realise this; I can’t wait for Logic and Set Theory this coming year), I wasn’t talking about “models” in the sense that pebbles are a Model of the Theory PA. I was talking in the sense that PA is a model of the behaviour observed in pebbles. If PA fails to model pebbles, that doesn’t mean PA is wrong, it just means that pebbles don’t follow PA. If a Model of PA exists in which SS0+SS0 = SSS0, then the Theory PA materially cannot prove that SS0+SS0 ≠ SSS0, and if such a proof has been constructed from the axiomata of the Theory then either the proof is in error (exists a step not justified by the inference rules), or the combination of axiomata and inference rules contains a contradiction (which can be rephrased as “under these inference rules, the Theory is not consistent”), or the claimed Model is not in fact a Model at all (in which case one of the axiomata does not, in fact, apply to it).
I should probably write down what I think I know about the epistemic status of mathematics and why I think I know it, because I’m pretty sure I disagree quite strongly with you (and my prior probability of me being right and you being wrong is rather low).
Scientists and mathematicians use the word “model” in exactly opposite ways. This is occasionally confusing.
Reading your essay I wondered whether it would have been more effective if you had chosen bigger numbers than 2, 2, and 3. e.g. “How to convince me that 67+41 = 112.”
That would have been a damn nuisance, because throughout the rest of this comment thread we’d have been writing unhelpfully long strings of Ss. ;)
I was proud of this comment and I comfort myself with your explanation for why it got the response it did.
Maybe I’m misinterpreting you, but could you explain how any non-symmetric equation can possibly be true in all models of arithmetic?
The purpose of the article is only to describe some subjective experiences that would cause you to conclude that SS0+SS0 = SSS0 is true in all models of arithmetic. But Eliezer can only describe certain properties that those subjective experiences would have. He can’t make you have the experiences themselves.
So, for example, he could say that one such experience would conform to the following description: “You count up all the S’s on one side of the equation, and you count up all the S’s on the other side of the equation, and you find yourself getting the same answer again and again. You show the equation to other people, and they get the same answer again and again. You build a computer from scratch to count the S’s on both sides, and it says that there are the same number again and again.”
Such a description gives some features of an experience. The description provides a test that you could apply to any given experience and answer the question “Does this experience satisfy this description or not?” But the description is not like one in a novel, which, ideally, would induce you to have the experience, at least in your imagination. That is a separate and additional task beyond what this post set out to accomplish.
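The counting check described above is entirely mechanical. A minimal sketch (the equation-as-string format is an assumption made here purely for illustration):

```python
def s_count(side):
    """For sums of successor numerals, the number of S's is the value."""
    return side.count("S")

def sides_agree(equation):
    """Check whether both sides of 'lhs = rhs' carry the same count of S's."""
    lhs, rhs = equation.replace(" ", "").split("=")
    return s_count(lhs) == s_count(rhs)

print(sides_agree("SS0 + SS0 = SSSS0"))  # True
print(sides_agree("SS0 + SS0 = SSS0"))   # False
```

Building the check on different substrates (by hand, on a computer, with pebbles) is the force of the description: any experience satisfying it involves the counts coming out equal again and again.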
Yes, I am aware of that. However, I don’t think two pebbles on the table plus another two pebbles on the table resulting in three pebbles on the table could cause anyone sane to conclude that SS0 + SS0 = SSS0 is true in all models of arithmetic. In order to be convinced of that, you would have to assign “PA doesn’t apply to pebbles” a lower prior probability than “PA is wrong”.
The statement “PA applies to pebbles” (or anything else, for that matter) doesn’t follow from the Peano axioms in any way and is therefore not part of Peano arithmetic. So what if Peano arithmetic doesn’t apply to pebbles? There are other arithmetics that don’t apply either, and that doesn’t make them wrong. You’re using them every day in situations where they do apply.
A mathematical theory doesn’t consist of beliefs that are based on evidence; it’s an axiomatic system. There is no way any real-life situation could convince me that PA is false. Saying “SS0 + SS0 = SSS0 is true in all models of arithmetic” sounds like “0 = S0” or “garble asdf qwerty sputz” to me. It just doesn’t make any sense.
Mathematics has nothing to do with experience; only the extent to which mathematics applies to reality does.
That you have certain mathematical beliefs has a lot to do with the experiences that you have had. This applies in particular to your beliefs about what the theorems of PA are.
Sorry, I edited the statement in question right before you posted that because I anticipated a similar reaction. However, you’re still wrong. It only has to do with my beliefs about the extent to which Peano arithmetic applies to reality, which is something completely different.
Edit: Ok, you’re probably not wrong, but it rather seems we are talking about different things when we say “mathematical beliefs”. Whether Peano arithmetic applies to reality is not a mathematical belief for me.
Consider the experiences that you have had while reading and thinking about proofs within PA. (The experience of devising and confirming a proof is just a particular kind of experience, after all.) Are you saying that the contents of those experiences have had nothing to do with the beliefs that you have formed about what the theorems of PA are?
Suppose that those experiences had been systematically different in a certain way. Say that you consistently made a certain kind of mistake while confirming PA proofs, so that certain proofs seemed to be valid to you that don’t seem valid to you in reality. Would you not have arrived at different beliefs about what the theorems of PA are?
That is the sense in which your beliefs about what the theorems of PA are depend on your experiences.
I’m not sure I 100% understand what you’re saying, but the question “which beliefs will I end up with if logical reasoning itself is flawed” is of little interest to me.
Is the question “Which beliefs will I end up with if my faculty of logical reasoning is flawed” also of little interest to you?
Yes, because if I assume that my faculty of logical reasoning is flawed, no deductions of logical reasoning I do can be considered certain, in which case everything falls: Mathematics, physics, bayesianism, you name it. It is therefore (haha! but what if my faculty of logical reasoning is flawed?) very irrational to assume this.
But you know that your faculty of logical reasoning is flawed to some extent. Humans are not perfect logicians. We manage to find use in making long chains of logical deductions even though we know that they contain mistakes with some nonzero probability.
I don’t know that. Can you prove that under the assumption you’re making?
As I see it, my faculty of logical reasoning is not flawed in any way. The only thing that’s flawed is my faculty of doing logical reasoning, i.e. I’m not always doing logical reasoning when I should be. But that’s hardly the matter here.
I would be very interested in how you can come to any conclusion under the assumption that the logical reasoning you do to come to that conclusion is flawed. If my faculty of logical reasoning is flawed, I can only say one thing with certainty, which is that my faculty of logical reasoning is flawed. Actually, I don’t think I could even say that.
Edit:
I don’t consider this to be a problem of actual faculty of logical reasoning because if someone finds a logical mistake I will agree with them.
So you don’t consider mistakes in logical reasoning a problem because someone might point them out to you? What if it’s an easy mistake to make, and a lot of other people make the same mistake? At this point, it seems like you’re arguing about the definition of the words “problem with”, not about states of the world. Can you clarify what disagreement you have about states of the world?
I think the point is that mathematical reasoning is inherently self-correcting in this sense, and that this corrective force is intentionistic and Lamarckian—it is being corrected toward a mathematical argument which one thinks of as a timeless perfect Form (because come on, are there really any mathematicians who don’t, secretly, believe in the Platonic realism of mathematics?), and not just away from an argument that’s flawed.
An incorrect theory can appear to be supported by experimental results (with probability going to 0 as the sample size goes to ∞), and if you have the finite set of experimental results pointing to the wrong conclusion, then no amount of mind-internal examination of those results can correct the error (if it could, your theory would not be predictive; conservation of probability, you all know that). But mind-internal examination of a mathematical argument, without any further entangling (so no new information, in the Bayesian sense, about the outside world; only new information about the world inside your head), can discover the error, and once it has done so, it is typically a mechanical process to verify that the error is indeed an error and that the correction has indeed corrected that error.
This remains true if the error is an error of omission (We haven’t found the proof that T, so we don’t know that T, but in fact there is a proof of T).
So you’re not getting new bits from observed reality, yet you’re making new discoveries and overthrowing past mistakes. The bits are coming from the processing; your ignorance has decreased by computation without the acquisition of bits by entangling with the world. That’s why deductive knowledge is categorically different, and why errors in logical reasoning are not a problem with the idea of logical reasoning itself, nor do they exclude a mathematical statement from being unconditionally true. They just exclude the possibility of unconditional knowledge.
Can you conceive of a world in which, say, ⋀∅ is false? It’s certainly a lot harder than conceiving of a world in which earplugs obey “2+2=3”-arithmetic, but is your belief that ⋀∅ unconditional? What is the absolutely most fundamentally obvious tautology you can think of, and is your belief in it unconditional? If not, what kind of evidence could there be against it? It seems to me that ¬⋀∅ would require “there exists a false proposition which is an element of the empty set”; in order to make an error there I’d have to have made an error in looking up a definition, in which case I’m not really talking about ⋀∅ when I assert its truth; nonetheless the thing I am talking about is a tautological truth and so one still exists (I may have gained or lost a ‘box’, here, in which case things don’t work).
My mind is beginning to melt and I think I’ve drifted off topic a little. I should go to bed. (Sorry for rambling)
I guess there are my beliefs-which-predict-my-expectations and my aliefs-which-still-weird-me-out. In the sense of beliefs which predict my expectation, I would say the following about mathematics: as far as logic is concerned, I have seen (with my eyes, connected to neurons, and so on) the proof that from P&-P anything follows, and since I do want to distinguish “truth” from “falsehood”, I view it (unless I made a mistake in the proof of P&-P->Q, which I view as highly unlikely—an easy million-to-one against) as false. Anything which leads me to P&-P, therefore, I see as false, conditional on the possibility that I made a mistake in the proof (or did not notice a mistake someone else made). Since I have a proof from “2+2=3” to “2+2=3 and 2+2!=3” (which is fairly simple, and I checked multiple times), I view 2+2=3 as equally unlikely. That’s surely entanglement with the world—I manipulated symbols written by a physical pen on physical paper, and at each stage, the line following obeyed a relationship with the line before it. My belief that “there is some truth”, I guess, can be called unconditional—nothing I see will convince me otherwise. But I’m not even certain I can conceive of a world without truth, while I can conceive of a world, sadly, where there are mistakes in my proofs :)
You’re missing the essential point about deductives, which is this:
Changing the substrate used for the calculations does not change the experiment.
With a normal experiment, if you repeat my experiment it’s possible that your apparatus differs from mine in a way which (unbeknownst to either of us) is salient and affects the outcome.
With mathematical deduction, if our results disagree, (at least) one of us is simply wrong, it’s not “this datum is also valid but it’s data about a different set of conditions”, it’s “this datum contains an error in its derivation”. It is the same experiment, and the same computation, whether it is carried out on my brain, your brain, your brain using pen and paper as an external single-write store, theorem-prover software running on a Pentium, the same software running on an Athlon, different software in a different language running on a Babbage Analytical Engine… it’s still the same experiment. And a mistake in your proof really is a mistake, rather than the laws of mathematics having been momentarily false leading you to a false conclusion. To quote the article, “Unconditional facts are not the same as unconditional beliefs.” Contrapositive: conditional beliefs are not the same as conditional facts.
The only way in which your calculation entangled with the world is in terms of the reliability of pen-and-paper single-write storage; that reliability is not contingent on what the true laws of mathematics are, so the bits that come from that are not bits you can usefully entangle with. The bits that you can obtain about the true laws of mathematics are bits produced by computation.
I don’t consider these mistakes to be no problem at all. What I meant to say is that the existence of these noise errors doesn’t reduce the reasonableness of my going around and using logical reasoning to draw deductions. Which also means that if reality seems to contradict my deductions, then either there is an error within my deductions that I can theoretically find, or there is an error within the line of thought that made me doubt my deductions, for example eyes being inadequate tools for counting pebbles. To put it more generally: If I don’t find errors within my deductions, then my perception of reality is not an appropriate measure for the truth of my deductions, unless said deductions deal in any way with the applicability of other deductions to reality, or with reality in general, which mathematics does not.
It’s not as if errors in perceiving reality weren’t much more numerous and harder to detect than errors in anyone’s faculty of doing logical reasoning.
And the probability of an error in a given logical argument gets smaller as the chain of deductions gets shorter and as the number of verifications of the argument gets larger.
Nonetheless, the probability of error should never reach zero, even if the argument is as short as the proof that SS0 + SS0 = SSSS0 in PA, and even if the proof has been verified by yourself and others billions of times.
ETA: Where ever I wrote “proof” in this comment, I meant “alleged proof”. (Erm … except for in this ETA.)
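The claim that the probability of error shrinks with repeated verification but never reaches zero can be made concrete with a toy Bayesian update. All the rates below are made-up numbers chosen only to illustrate the shape of the calculation, not claims about actual human error rates:

```python
# Made-up rates for illustration only.
p_flawed = 0.01             # prior: the alleged proof contains an error
p_pass_if_sound = 0.999     # chance a sound proof passes one verification
p_pass_if_flawed = 0.05     # chance a flawed proof slips past one verification

for _ in range(30):         # thirty independent verifications, all passed
    num = p_flawed * p_pass_if_flawed
    den = num + (1 - p_flawed) * p_pass_if_sound
    p_flawed = num / den    # Bayes: P(flawed | verification passed)

print(0 < p_flawed < 1e-30)  # True: astronomically small, but never zero
```

Each passed check multiplies the odds of a flaw by roughly p_pass_if_flawed / p_pass_if_sound, so the posterior decays geometrically without ever hitting zero.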
The probability that there is an error within the line of thought that lets me come to the conclusion that there is an error within any theorem of Peano arithmetic is always higher than the probability that there actually is an error within any theorem of Peano arithmetic, since probability theory is based on Peano arithmetic and if SS0 + SS0 = SSSS0 were wrong, probability theory would be at least equally wrong.
(Emphasis added.) Where ever I wrote “proof” in the grandparent comment, I should have written “alleged proof”.
We probably agree that the idea of “an error in a theorem of PA” isn’t meaningful. But the idea that everyone was making a mistake the whole time that they thought that SS0 + SS0 = SSSS0 was a theorem of PA, while, all along, SS0 + SS0 = SSS0 was a theorem of PA — that idea is meaningful. After all, people are all the time alleging that some statement is a theorem of PA when it really isn’t. That is to say, people make arithmetic mistakes all the time.
That is true. However, if your perception of reality leads you to the thought that there might be an error with SS0 + SS0 = SSSS0, and you can’t find that error, then it is irrational to assume that there actually is an error with SS0 + SS0 = SSSS0 rather than with your perception of reality or with the concept of applying SS0 + SS0 = SSSS0 to reality.
Can we agree on that?
I think so, if I understand you. But I think that you’re referring to a more restricted class of “perceptions of reality” than Eliezer is.
In the kind of scenario that Eliezer is talking about, your perceptions of reality include seeming to find an error in the alleged proof that SS0 + SS0 = SSSS0 (and confirming your perception of an error sufficiently many times to outweigh all the times when you thought you’d confirmed that the alleged proof was valid). If that is the kind of “perception of reality” that we’re talking about, then you should conclude that there was an error in the alleged proof of SS0 + SS0 = SSSS0.
That is all good and valid, and of course I don’t believe in any results of deductions with errors in them just based on said deductions. But that has nothing to do with reality. Two pebbles plus two pebbles resulting in three pebbles is not what convinces me that SS0 + SS0 = SSS0; finding the error is, and that is not something that is perceived (i.e. it is purely abstract).
If we’re defining “situation” in a way similar to how it’s used in the top-level post (pebbles and stuff), then there simply can’t exist a situation that could convince me that SS0 + SS0 = SSSS0 is wrong in Peano arithmetic. It might convince me to check Peano arithmetic, of course, but that’s all.
I try not to argue about definitions of words, but it just seems to me that as soon as you define words like “perception”, “situation”, “believe” etcetera in a way that would result in a situation capable of convincing me that SS0 + SS0 = SSS0 is true in Peano arithmetic, we are not talking about reality anymore.
Okay, I just thought of a possible situation that would indeed “convince” me of 2 + 2 = 3: Disable the module of my brain responsible for logical reasoning, then show me some stage magic involving pebbles or earplugs, and then my poor rationalization module would probably end up with some explanation along the lines of 2 + 2 = 3.
But let’s not go there.
Sorry for not being clear. By “faculty of logical reasoning”, I mean nothing other than “faculty of doing logical reasoning”.
In that case I have probably answered your original question here.
And another thing: It might be possible that if Peano arithmetic didn’t apply to reality I wouldn’t have any beliefs about Peano arithmetic, because I might not even think of it. However, there is no way I could establish the Peano axioms and then believe that SS0 + SS0 = SSS0 is true within Peano arithmetic. It’s just not possible.
SS0 isn’t a free variable like “x”, it is, in any given model of arithmetic, the unique object related by the successor relation to the unique object related by the successor relation to the unique object which is not related by the successor relation to any object, which is how mathematicians say “Two”.
Although as a mathmo myself I should point out that, to save time, we usually pronounce it “Two”. :)
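For concreteness, the successor-numeral reading can be sketched in a few lines of Python; this is an illustration only, and the tuple encoding of numerals is an arbitrary choice:

```python
# Peano numerals: 0 is the empty tuple, S(n) wraps n one level deeper.
ZERO = ()

def S(n):
    """Successor of n."""
    return ("S", n)

def add(a, b):
    """Addition by the usual recursion: a + 0 = a and a + S(b) = S(a + b)."""
    return a if b == ZERO else S(add(a, b[1]))

TWO = S(S(ZERO))    # SS0, i.e. "Two"
THREE = S(TWO)      # SSS0
FOUR = S(THREE)     # SSSS0

print(add(TWO, TWO) == FOUR)   # True:  SS0 + SS0 = SSSS0
print(add(TWO, TWO) == THREE)  # False: SS0 + SS0 is not SSS0
```

The point is that SS0 is a closed term: once the successor structure is fixed, it denotes a unique object, with nothing left free to vary.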
I am quite familiar with TNT. However, either you are talking about models of arithmetic based on the Peano axioms, in which case e.g. SS0 + SS0 = SSS0 simply cannot be true, for it contradicts those axioms, and if both the Peano axioms and said equation were true, you wouldn’t have a model of arithmetic at all; or (what I’m assuming) you are actually talking about non-Peano arithmetics, in which case there is no compelling reason why any equation of this kind should generally be true anyway.
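To see what such a non-Peano structure might look like, here is a made-up toy example (not anything proposed in the thread): a “saturating” arithmetic on {0, 1, 2, 3} in which values cap at 3.

```python
def S(n):
    """Successor that saturates at 3."""
    return min(n + 1, 3)

def add(a, b):
    """Addition that saturates at 3."""
    return min(a + b, 3)

two = S(S(0))
print(add(two, two))  # 3: in this structure, "SS0 + SS0 = SSS0" holds
print(S(2) == S(3))   # True: the successor is not injective here,
                      # so this is not a model of the Peano axioms
```

The equation holds, but only because a Peano axiom (injectivity of the successor) fails, which is exactly the dichotomy described above.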
On another note, it seems that Bayesianism is heavily based on Peano arithmetic, so refuting Peano arithmetic by means of Bayesianism seems like refuting Bayesianism rather than refuting Peano arithmetic, at least to me.