Maybe I’m misinterpreting you, but could you explain how any non-symmetric equation can possibly be true in all models of arithmetic?
The purpose of the article is only to describe some subjective experiences that would cause you to conclude that SS0+SS0 = SSS0 is true in all models of arithmetic. But Eliezer can only describe certain properties that those subjective experiences would have. He can’t make you have the experiences themselves.
So, for example, he could say that one such experience would conform to the following description: “You count up all the S’s on one side of the equation, and you count up all the S’s on the other side of the equation, and you find yourself getting the same answer again and again. You show the equation to other people, and they get the same answer again and again. You build a computer from scratch to count the S’s on both sides, and it says that there are the same number again and again.”
Such a description gives some features of an experience. The description provides a test that you could apply to any given experience and answer the question “Does this experience satisfy this description or not?” But the description is not like one in a novel, which, ideally, would induce you to have the experience, at least in your imagination. That is a separate and additional task beyond what this post set out to accomplish.
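The counting procedure in that description is mechanical enough to sketch in code; a minimal illustration (the function and the numeral strings are mine, not from the post):

```python
def count_s(term: str) -> int:
    """Count successor applications in a unary numeral like 'SS0'."""
    assert term.endswith("0"), "expected a unary numeral ending in 0"
    return term.count("S")

# The disputed equation SS0 + SS0 = SSS0: count the S's on each side.
lhs = sum(count_s(t) for t in "SS0+SS0".split("+"))  # 2 + 2 = 4
rhs = count_s("SSS0")                                # 3
print(lhs, rhs)  # 4 3 -- however many times you recount, the sides differ
```

The point of the thought experiment is precisely that every such recount, on every substrate, would come out the same way.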
Yes, I am aware of that. However, I don’t think two pebbles on the table plus another two pebbles on the table resulting in three pebbles on the table could cause anyone sane to conclude that SS0 + SS0 = SSS0 is true in all models of arithmetic. In order to be convinced of that, you would have to assign “PA doesn’t apply to pebbles” a lower prior probability than “PA is wrong”.
The statement “PA applies to pebbles” (or to anything else, for that matter) doesn’t follow from the Peano axioms in any way and is therefore not part of Peano arithmetic. So what if Peano arithmetic doesn’t apply to pebbles? There are other arithmetics that don’t apply either, and that doesn’t make them wrong. You use them every day in situations where they do apply.
A mathematical theory doesn’t consist of beliefs that are based on evidence; it’s an axiomatic system. There is no way any real-life situation could convince me that PA is false. Saying “SS0 + SS0 = SSS0 is true in all models of arithmetic” sounds like “0 = S0” or “garble asdf qwerty sputz” to me. It just doesn’t make any sense.
Mathematics has nothing to do with experience; only the extent to which mathematics applies to reality does.
That you have certain mathematical beliefs has a lot to do with the experiences that you have had. This applies in particular to your beliefs about what the theorems of PA are.
Sorry, I edited the statement in question right before you posted that because I anticipated a similar reaction. However, you’re still wrong. It only has to do with my beliefs about the extent to which Peano arithmetic applies to reality, which is something completely different.
Edit: Ok, you’re probably not wrong; rather, it seems we are talking about different things when we say “mathematical beliefs”. Whether Peano arithmetic applies to reality is not a mathematical belief for me.
Consider the experiences that you have had while reading and thinking about proofs within PA. (The experience of devising and confirming a proof is just a particular kind of experience, after all.) Are you saying that the contents of those experiences have had nothing to do with the beliefs that you have formed about what the theorems of PA are?
Suppose that those experiences had been systematically different in a certain way. Say that you consistently made a certain kind of mistake while confirming PA proofs, so that certain proofs seemed to be valid to you that don’t seem valid to you in reality. Would you not have arrived at different beliefs about what the theorems of PA are?
That is the sense in which your beliefs about what the theorems of PA are depend on your experiences.
I’m not sure I 100% understand what you’re saying, but the question “which beliefs will I end up with if logical reasoning itself is flawed” is of little interest to me.
Is the question “Which beliefs will I end up with if my faculty of logical reasoning is flawed” also of little interest to you?

Yes, because if I assume that my faculty of logical reasoning is flawed, no deductions of logical reasoning I do can be considered certain, in which case everything falls: mathematics, physics, Bayesianism, you name it. It is therefore (haha! but what if my faculty of logical reasoning is flawed?) very irrational to assume this.
But you know that your faculty of logical reasoning is flawed to some extent. Humans are not perfect logicians. We manage to find use in making long chains of logical deductions even though we know that they contain mistakes with some nonzero probability.
I don’t know that. Can you prove that under the assumption you’re making?
As I see it, my faculty of logical reasoning is not flawed in any way. The only thing that’s flawed is my faculty of doing logical reasoning, i.e. I’m not always doing logical reasoning when I should be. But that’s hardly the matter here.
I would be very interested in how you can come to any conclusion under the assumption that the logical reasoning you do to come to that conclusion is flawed. If my faculty of logical reasoning is flawed, I can only say one thing with certainty, which is that my faculty of logical reasoning is flawed. Actually, I don’t think I could even say that.
Edit:
We manage to find use in making long chains of logical deductions even though we know that they contain mistakes with some nonzero probability.
I don’t consider this to be a problem of the actual faculty of logical reasoning, because if someone finds a logical mistake I will agree with them.
So you don’t consider mistakes in logical reasoning a problem because someone might point them out to you? What if it’s an easy mistake to make, and a lot of other people make the same mistake? At this point, it seems like you’re arguing about the definition of the words “problem with”, not about states of the world. Can you clarify what disagreement you have about states of the world?
I think the point is that mathematical reasoning is inherently self-correcting in this sense, and that this corrective force is intentionistic and Lamarckian—it is being corrected toward a mathematical argument which one thinks of as a timeless perfect Form (because come on, are there really any mathematicians who don’t, secretly, believe in the Platonic realism of mathematics?), and not just away from an argument that’s flawed.
An incorrect theory can appear to be supported by experimental results (with probability going to 0 as the sample size goes to infinity), and if you have the finite set of experimental results pointing to the wrong conclusion, then no amount of mind-internal examination of those results can correct the error (if it could, your theory would not be predictive; conservation of probability, you all know that). But mind-internal examination of a mathematical argument, without any further entangling (so no new information, in the Bayesian sense, about the outside world; only new information about the world inside your head), can discover the error, and once it has done so, it is typically a mechanical process to verify that the error is indeed an error and that the correction has indeed corrected that error.
This remains true if the error is an error of omission (We haven’t found the proof that T, so we don’t know that T, but in fact there is a proof of T).
So you’re not getting new bits from observed reality, yet you’re making new discoveries and overthrowing past mistakes. The bits are coming from the processing; your ignorance has decreased by computation without the acquisition of bits by entangling with the world. That’s why deductive knowledge is categorically different, and why errors in logical reasoning are not a problem with the idea of logical reasoning itself, nor do they exclude a mathematical statement from being unconditionally true. They just exclude the possibility of unconditional knowledge.
Can you conceive of a world in which, say, ⋀∅ is false? It’s certainly a lot harder than conceiving of a world in which earplugs obey “2+2=3”-arithmetic, but is your belief that ⋀∅ unconditional? What is the absolutely most fundamentally obvious tautology you can think of, and is your belief in it unconditional? If not, what kind of evidence could there be against it? It seems to me that ¬⋀∅ would require “there exists a false proposition which is an element of the empty set”; in order to make an error there I’d have to have made an error in looking up a definition, in which case I’m not really talking about ⋀∅ when I assert its truth; nonetheless the thing I am talking about is a tautological truth and so one still exists (I may have gained or lost a ‘box’, here, in which case things don’t work).
My mind is beginning to melt and I think I’ve drifted off topic a little. I should go to bed. (Sorry for rambling)
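As an aside, the convention that the empty conjunction ⋀∅ is true is baked into programming languages too; a quick sanity check:

```python
# An empty conjunction is vacuously true: there is no element of the
# empty set that could serve as a false conjunct. Dually, an empty
# disjunction is false.
print(all([]))  # True
print(any([]))  # False
```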
I guess there are my beliefs-which-predict-my-expectations and my aliefs-which-still-weird-me-out. In the sense of beliefs which predict my expectations, I would say the following about mathematics: as far as logic is concerned, I have seen (with my eyes, connected to neurons, and so on) the proof that from P&-P anything follows, and since I do want to distinguish “truth” from “falsehood”, I view P&-P as false (unless I made a mistake in the proof of P&-P->Q, which I view as highly unlikely—an easy million-to-one against). Anything which leads me to P&-P, therefore, I see as false, conditional on the possibility that I made a mistake in the proof (or failed to notice a mistake someone else made). Since I have a proof from “2+2=3” to “2+2=3 and 2+2!=3” (which is fairly simple, and I checked multiple times), I view 2+2=3 as equally unlikely. That’s surely entanglement with the world—I manipulated symbols written by a physical pen on physical paper, and at each stage, each line obeyed a relationship with the line before it. My belief that “there is some truth”, I guess, can be called unconditional—nothing I see will convince me otherwise. But I’m not even certain I can conceive of a world without truth, while I can conceive of a world, sadly, where there are mistakes in my proofs :)
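The principle of explosion invoked above (from P and ¬P, anything follows) is short enough to state formally; a sketch in Lean 4:

```lean
-- Ex falso quodlibet: from a contradiction, any proposition Q follows.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.1 h.2
```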
You’re missing the essential point about deductives, which is this:
Changing the substrate used for the calculations does not change the experiment.
With a normal experiment, if you repeat my experiment it’s possible that your apparatus differs from mine in a way which (unbeknownst to either of us) is salient and affects the outcome.
With mathematical deduction, if our results disagree, (at least) one of us is simply wrong, it’s not “this datum is also valid but it’s data about a different set of conditions”, it’s “this datum contains an error in its derivation”. It is the same experiment, and the same computation, whether it is carried out on my brain, your brain, your brain using pen and paper as an external single-write store, theorem-prover software running on a Pentium, the same software running on an Athlon, different software in a different language running on a Babbage Analytical Engine… it’s still the same experiment. And a mistake in your proof really is a mistake, rather than the laws of mathematics having been momentarily false leading you to a false conclusion.
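The substrate-independence point can be illustrated by running the “same computation” two different ways; both toy implementations below are mine, not anything from the thread:

```python
def add_by_counting(a: str, b: str) -> str:
    # Substrate 1: count the S's in each numeral and concatenate.
    return "S" * (a.count("S") + b.count("S")) + "0"

def add_by_recursion(a: str, b: str) -> str:
    # Substrate 2: recurse on the Peano definition a + S(b) = S(a + b).
    return a if b == "0" else "S" + add_by_recursion(a, b[1:])

# Different procedures, different "hardware", same computation, same answer:
print(add_by_counting("SS0", "SS0"))   # SSSS0
print(add_by_recursion("SS0", "SS0"))  # SSSS0
```

If the two ever disagreed, (at least) one implementation would simply contain an error in its derivation, not data about different conditions.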
To quote the article, “Unconditional facts are not the same as unconditional beliefs.” Contrapositive: conditional beliefs are not the same as conditional facts.
The only way in which your calculation entangled with the world is in terms of the reliability of pen-and-paper single-write storage; that reliability is not contingent on what the true laws of mathematics are, so the bits that come from that are not bits you can usefully entangle with. The bits that you can obtain about the true laws of mathematics are bits produced by computation.
I don’t consider these mistakes to be no problem at all. What I meant to say is that the existence of these noise errors doesn’t reduce the reasonableness of my going around and using logical reasoning to draw deductions. This also means that if reality seems to contradict my deductions, then either there is an error within my deductions that I can, in principle, find, or there is an error within the line of thought that made me doubt my deductions—for example, eyes being inadequate tools for counting pebbles. To put it more generally: if I don’t find errors within my deductions, then my perception of reality is not an appropriate measure of the truth of my deductions, unless said deductions deal in some way with the applicability of other deductions to reality, or with reality in general, which mathematics does not.
It’s not as if errors in perceiving reality weren’t much more numerous and harder to detect than errors in anyone’s faculty of doing logical reasoning.
And the probability of an error in a given logical argument gets smaller as the chain of deductions gets shorter and as the number of verifications of the argument gets larger.
Nonetheless, the probability of error should never reach zero, even if the argument is as short as the proof that SS0 + SS0 = SSSS0 in PA, and even if the proof has been verified by yourself and others billions of times.
ETA: Wherever I wrote “proof” in this comment, I meant “alleged proof”. (Erm … except in this ETA.)
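As a toy model of why that probability never reaches zero (the independence assumptions and parameter values here are mine, purely illustrative):

```python
def p_undetected_error(p_step: float, steps: int, checks: int, p_catch: float) -> float:
    """Probability that an argument of `steps` deductions contains an error
    that survived `checks` verifications, assuming each step is independently
    wrong with probability p_step and each verification independently catches
    an existing error with probability p_catch."""
    p_any_error = 1 - (1 - p_step) ** steps
    p_all_checks_miss = (1 - p_catch) ** checks
    return p_any_error * p_all_checks_miss

# Even a one-step proof, checked ten times by near-perfect verifiers,
# retains a nonzero (astronomically small) chance of an uncaught error:
print(p_undetected_error(1e-6, 1, 10, 0.99) > 0.0)  # True
```

Under this model the probability shrinks with shorter chains and more verifications, exactly as the comment above says, but never hits zero for any finite number of checks.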
The probability that there is an error within the line of thought that leads me to conclude that there is an error within some theorem of Peano arithmetic is always higher than the probability that there actually is an error within any theorem of Peano arithmetic, since probability theory is based on Peano arithmetic, and if SS0 + SS0 = SSSS0 were wrong, probability theory would be at least equally wrong.
the probability that there actually is an error within any theorem of Peano arithmetic.
(Emphasis added.) Wherever I wrote “proof” in the grandparent comment, I should have written “alleged proof”.
We probably agree that the idea of “an error in a theorem of PA” isn’t meaningful. But the idea that everyone was making a mistake the whole time that they thought that SS0 + SS0 = SSSS0 was a theorem of PA, while, all along, SS0 + SS0 = SSS0 was a theorem of PA — that idea is meaningful. After all, people are alleging all the time that some statement is a theorem of PA when it really isn’t. That is to say, people make arithmetic mistakes all the time.
That is true. However, if your perception of reality leads you to the thought that there might be an error with SS0 + SS0 = SSSS0, and you can’t find that error, then it is irrational to assume that there actually is an error with SS0 + SS0 = SSSS0 rather than with your perception of reality or with the concept of applying SS0 + SS0 = SSSS0 to reality.

Can we agree on that?
I think so, if I understand you. But I think that you’re referring to a more restricted class of “perceptions of reality” than Eliezer is.
In the kind of scenario that Eliezer is talking about, your perceptions of reality include seeming to find an error in the alleged proof that SS0 + SS0 = SSSS0 (and confirming your perception of an error sufficiently many times to outweigh all the times when you thought you’d confirmed that the alleged proof was valid). If that is the kind of “perception of reality” that we’re talking about, then you should conclude that there was an error in the alleged proof of SS0 + SS0 = SSSS0.
That is all good and valid, and of course I don’t believe in any results of deductions with errors in them just based on said deductions. But that has nothing to do with reality. Two pebbles plus two pebbles resulting in three pebbles is not what convinces me that SS0 + SS0 = SSS0; finding the error is, and that is not something that is perceived (i.e. it is purely abstract).
If we’re defining “situation” in a way similar to how it’s used in the top-level post (pebbles and stuff), then there simply can’t exist a situation that could convince me that SS0 + SS0 = SSSS0 is wrong in Peano arithmetic. It might convince me to check Peano arithmetic, of course, but that’s all.
I try not to argue about definitions of words, but it just seems to me that as soon as you define words like “perception”, “situation”, “believe”, et cetera, in a way that would result in a situation capable of convincing me that SS0 + SS0 = SSS0 is true in Peano arithmetic, we are not talking about reality anymore.
Okay, I just thought of a possible situation that would indeed “convince” me of 2 + 2 = 3: Disable the module of my brain responsible for logical reasoning, then show me some stage magic involving pebbles or earplugs, and then my poor rationalization module would probably end up with some explanation along the lines of 2 + 2 = 3.

But let’s not go there.
As I see it, my faculty of logical reasoning is not flawed in any way. The only thing that’s flawed is my faculty of doing logical reasoning, i.e. I’m not always doing logical reasoning when I should be.
Sorry for not being clear. By “faculty of logical reasoning”, I mean nothing other than “faculty of doing logical reasoning”.

In that case I have probably answered your original question here.
And another thing: it might be that if Peano arithmetic didn’t apply to reality, I wouldn’t have any beliefs about Peano arithmetic, because I might not even think of it. However, there is no way I could establish the Peano axioms and then believe that SS0 + SS0 = SSS0 is true within Peano arithmetic. It’s just not possible.