Perhaps I am misreading you, but I think your gloss is incorrect. Eliezer’s point is about his map, not the territory. He is describing circumstances under which he would be convinced that 2 + 2 = 3, not circumstances under which 2 + 2 would actually be 3. I do not take him to be arguing (as you suggest) that math is physical, whatever that would mean. He is arguing that beliefs about math are physically instantiated, and subject to alteration by some possible physical process.
I’m afraid you lose me completely in the second part of your comment. Why is physics definitely not objective? And what does the similarity of math to modus ponens and its dissimilarity from empirical statements have to do with the subjective/objective distinction?
Perhaps I am misreading you, but I think your gloss is incorrect. Eliezer’s point is about his map, not the territory. He is describing circumstances under which he would be convinced that 2 + 2 = 3, not circumstances under which 2 + 2 would actually be 3. I do not take him to be arguing (as you suggest) that math is physical, whatever that would mean. He is arguing that beliefs about math are physically instantiated, and subject to alteration by some possible physical process.
In brief, Eliezer rejects a priori truths, and I don’t. An a priori truth is true without reference to empirical content, and believing that math has empirical content implies that other empirical evidence could appear that would falsify math. In short, Eliezer isn’t describing how he could come to believe 2 + 2 = 3, but how new evidence might show that 2 + 2 truly equals 3.
I’m afraid you lose me completely in the second part of your comment. Why is physics definitely not objective? And what does the similarity of math to modus ponens and its dissimilarity from empirical statements have to do with the subjective/objective distinction?
I’m just saying that basic arithmetic and modus ponens are analytic truths, and physics is not. The truth of physics assertions depends on empirical content.
In brief, Eliezer rejects a priori truths, and I don’t.
Have you always thought that? If not, what caused you to think that? When you were caused to think that, were you infinitely confident in what caused you to think that? If so, then how do you consider your failure to accept a priori truths while holding some? If no, then how do you justify believing several things likely and consequently believing something with infinite certainty, when each probable thing may be wrong?
I didn’t always know that (1) mathematical statements did not have empirical content, but I also didn’t always know (2) the Pythagorean Theorem. I’m skeptical that those facts tell you anything about the truth of either assertion (1) or (2).
Not to commit the mind projection fallacy, but it does show that (3) is false, where (3) is “The Pythagorean Theorem is so obviously true that all conscious minds must acknowledge it” (many religions have similar tenets).
So (2) is the sort of thing that one becomes convinced of by things not themselves believed infinitely likely to be true, or else at some point down the line there was a root belief underlying it that was the first thing thought infinitely likely to be true.
It’s this first thing infinitely likely to be true I am suspicious of.
What are the chances I have misread a random sentence? Higher than zero, in my experience. How then can I legitimately be infinitely convinced by sentences?
I think a better generalization is “Any intelligent being capable of recursive thought will accept the truth of a provable statement or be internally inconsistent.” But that formulation does have a “sufficiently intelligent” problem.
Consider some intelligent but non-mathematical subsection of society (e.g., lawyers). There are provable mathematical statements that some lawyer has not been exposed to, and so does not believe to be true (or false). Further, there are likely to be provable statements that the lawyer has been exposed to but lacks the training (or intelligence?) to decide are true.
I want to say that is a fact about the lawyer, or society, or bounded rationality. But it isn’t a very good response.
What are the chances I have misread a random sentence?
Errors are errors. And we are fallible creatures. If you don’t correct, then you are inconsistent without meaning to be so. If you do correct, then the fact of the error doesn’t tell you about the statement under investigation. And if you’d like to estimate the proportion of the time you make errors, that’s likely to be helpful in your decision-making, but it doesn’t convert non-empirical statements into empirical statements.
And a priori doesn’t mean true. There are lots of a priori false statements (e.g. 1=0 is not empirical, and also false).
There are provable mathematical statements that some lawyer has not been exposed to, and so does not believe to be true (or false).
Doesn’t strongly believe to be true, or false, or anything else. But the mind is not a void before the training, and after getting a degree in math it still won’t be a void; neither will it be a computer immune to gamma rays and quantum effects, working in PA with a proof of PA’s consistency that uses only PA. It will be a fallible thing with good reason to believe it has correctly read and parsed the definitions, etc.
If you do correct
We’re talking about errors I am committing without having detected them. Are you discussing the case where I attempt to believe falsely, and accidentally believe something true? Or something similar?
And if you’d like to estimate the proportion of the time you make errors, that’s likely to be helpful in your decision-making, but it doesn’t convert non-empirical statements into empirical statements.
Unfortunately, I have good reason to believe I imperfectly sort statements along the empirical/non-empirical divide.
Unfortunately, I have good reason to believe I imperfectly sort statements along the empirical/non-empirical divide.
“1 + 2 = 3” is a statement that lacks empirical content. “F = ma” is a statement that has empirical content and is falsifiable. “The way to maximize human flourishing is to build a friendly AI that implements CEV(everyone)” is a statement with empirical content that is not falsifiable.
Folk philosophers do a terrible job distinguishing between the categories “lacks empirical content” and “is not falsifiable.” Does that prove the categories are identical?
We’re talking about errors I am committing without having detected them. Are you discussing the case where I attempt to believe falsely, and accidentally believe something true? Or something similar?
I’m sorry, I don’t understand the question.
neither will it be a computer immune to gamma rays and quantum effects
Yes, there are ways to become delusion [delusional—oops]. It is worthwhile to estimate the likelihood of this possibility, but that isn’t what I’m trying to do here.
“1 + 2 = 3” is a statement that lacks empirical content. “F = ma” is a statement that has empirical content and is falsifiable. “The way to maximize human flourishing is to build a friendly AI that implements CEV(everyone)” is a statement with empirical content that is not falsifiable.
If you’re trying to demonstrate perfect ability to sort all statements into three bins, you have a lot more typing to do. If not, I don’t understand your point. Either you’re perfect at sorting such statements, or not. If not, there is a limit to how sure you should be that you correctly sorted each.
If you do correct [errors that you made but have not identified—Ed.], then the fact of the error doesn’t tell you about the statement under investigation.
I don’t know what this means.
there are ways to become delusion
?
It is worthwhile to estimate the likelihood of this possibility, but that isn’t what I’m trying to do here.
For each statement I believe true, I should estimate the chances of it being true < 1.
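To put rough, purely illustrative numbers on that (the figures here are invented for the example, not measured error rates): if a derivation has n steps, I carry out each step correctly with probability p, and the slips are independent, then the chance that the whole derivation is error-free is

p^n; for example, 0.999^1000 ≈ 0.37.

So even a one-in-a-thousand chance of error per step leaves me far from certain that a thousand-step proof is error-free, which is one reason to keep my credence in each conclusion below 1.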
If you’re trying to demonstrate perfect ability to sort all statements into three bins, you have a lot more typing to do. If not, I don’t understand your point. Either you’re perfect at sorting such statements, or not. If not, there is a limit to how sure you should be that you correctly sorted each.
It is interesting that all the statements we would like to be able to assign a truth value to can be sorted into one of these three bins. Additional bins are not necessary, and fewer bins would be insufficient.
In short, Eliezer isn’t describing how he could come to believe 2 + 2 = 3, but how new evidence might show that 2 + 2 truly equals 3.
From the beginning of his post:
I admit, I cannot conceive of a “situation” that would make 2 + 2 = 4 false. (There are redefinitions, but those are not “situations”, and then you’re no longer talking about 2, 4, =, or +.) But that doesn’t make my belief unconditional. I find it quite easy to imagine a situation which would convince me that 2 + 2 = 3.
So on the point of interpretation, I’m pretty sure you are wrong.
On the substantive point, I think reliance on traditional philosophical distinctions (a priori/a posteriori, analytic/synthetic) is a recipe for confusion. In my opinion (and I am far from the first to point this out) these distinctions are poorly articulated, if not downright incoherent. If you are going to employ these concepts, however, an important thing to keep in mind is the hard-won philosophical realization, stemming from a tradition stretching from Kant to Kripke, that the a priori/a posteriori distinction is orthogonal to the necessary/contingent distinction. The former is an epistemological distinction (propositions are justifiable a priori or a posteriori), and the latter is a metaphysical distinction (propositions are true/false necessarily or contingently).
My position (and, I believe, Eliezer’s) is that mathematical truths are necessarily true. A world in which 2 + 2 = 3 is impossible. This does not, however, entail that it is impossible to convince me that 2 + 2 = 3. Nor does it entail that empirical considerations are irrelevant to the justification of my belief that 2 + 2 = 4.
I am sure there is some proposition (perhaps some complicated mathematical truth) that you believe is necessarily true, but you are not certain that it is true. Maybe you are fairly confident but not entirely sure that you got the proof right. So even though you believe this proposition cannot possibly be false, you admit the possibility of evidence that would convince you it is false.
Thanks for a really interesting reply.

First, I do reject the analytic/synthetic distinction. It always seemed like Kant was trying to make something out of nothing there. But I do think that math lacks empirical content, which is why I label it a priori.
I am sure there is some proposition (perhaps some complicated mathematical truth) that you believe is necessarily true, but you are not certain that it is true. Maybe you are fairly confident but not entirely sure that you got the proof right. So even though you believe this proposition cannot possibly be false, you admit the possibility of evidence that would convince you it is false.
But if math is not empirical, then this way of talking about math makes it seem less certain than it really is. I may be fallible, and thus not know every mathematically or logically provable statement, but that doesn’t show anything about the nature of provable statements. A proof of the Pythagorean Theorem is not (empirical) evidence that the theorem is true. The proof (metaphysically) is the truth of the theorem.
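To make that concrete, here is a toy sketch in Lean 4 (my own illustration; nothing in this thread depends on Lean). The proof term below is not a piece of evidence that raises the probability of the equation; checking it consults no observation of the world, only computation on the definitions of the symbols involved.

```lean
-- A minimal, machine-checkable derivation of 2 + 2 = 4.
-- `rfl` type-checks because natural-number addition reduces by definition:
-- after unfolding, 2 + 2 and 4 are literally the same value.
example : 2 + 2 = 4 := rfl

-- The same statement established by the `decide` tactic, which evaluates
-- the decidable proposition and produces a proof object.
example : 2 + 2 = 4 := by decide
```

On my view the proof object itself is the (non-empirical) ground of the theorem; any doubt about whether the checker ran correctly is a fact about me and my tools, not about the theorem.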
That said, I would certainly appreciate suggestions on a deeper overview of the necessary/contingent distinction.
A proof of the Pythagorean Theorem is not (empirical) evidence that the theorem is true. The proof (metaphysically) is the truth of the theorem.
Consider the four color theorem. We have a proof by computer of this theorem, but it is far too complex for any human to verify. Would you agree that the fact that a computer built and programmed in a certain way claims to have proven the theorem is empirical evidence for the truth of the theorem? If yes, then why treat a proof computed by a human brain differently?
Consider the four color theorem. We have a proof by computer of this theorem, but it is far too complex for any human to verify. Would you agree that the fact that a computer built and programmed in a certain way claims to have proven the theorem is empirical evidence for the truth of the theorem?
No. The computer output is a strong justification for behaving as if all maps are four-colorable.
But if the “proof” cannot be understood, then the truth of the theorem is simply beyond human comprehension. We could petition the evolution fairy for a better brain. Then again, dogs don’t seem to mind that they can’t comprehend that the derivative of e^x is e^x.
No. The computer output is a strong justification for behaving as if all maps are four-colorable.
Would you feel differently if the proof were verified by a general AI? If not, how is this not just carbon chauvinism?
Also, if you want another example, consider the classification of finite simple groups. Here the combined proofs run to thousands of pages, and it is likely that no single human being has checked the entire thing. Is your analysis for that case different from that of the four color theorem?
Can the proof be understood by a motivated, human-intelligence Cartesian skeptic who is protected from errors of carelessness? Because a Cartesian skeptic will never derive true physics statements, no matter how much effort is applied, since the skeptic is cut off from empirical data by definition.
And I think that is an interesting distinction between math and physics.
I certainly admit that there are physical processes that could cause me to believe a false mathematical statement was true. But that is properly understood as a fact about me, and does not mean that math has any empirical content.
Can the proof be understood by a motivated, human-intelligence Cartesian skeptic who is protected from errors of carelessness?
To the same extent the proof of the four color theorem can be. It will just take orders of magnitude more time than any human has. So do you consider it to be proven in the same sense? Do you need to wait until such a person exists and does it? If so, why is that different?
That said, I would certainly appreciate suggestions on a deeper overview of the necessary/contingent distinction.

Here’s a quick overview of the necessity/contingency distinction in philosophy. For a deeper overview, try Kripke’s Naming and Necessity.