In brief, Eliezer rejects a priori truths, and I don’t.
Have you always thought that? If not, what caused you to think that? When you were caused to think that, were you infinitely confident in what caused you to think that? If so, then how do you account for your failure to accept a priori truths while holding some? If not, then how do you justify believing several things to be merely likely and consequently believing something with infinite certainty, when each of those probable things may be wrong?
I didn’t always know that (1) mathematical statements do not have empirical content, but I also didn’t always know (2) the Pythagorean theorem. I’m skeptical that those facts tell you anything about the truth of either assertion (1) or (2).
Not to commit the mind projection fallacy, but it does show that (3) is false, where (3) is “The Pythagorean theorem is so obviously true that all conscious minds must acknowledge it” (many religions have similar tenets).
So (2) is the sort of thing one becomes convinced of by things not themselves believed infinitely likely to be true; or else, at some point down the line, there was a root belief of that belief, the first thing held infinitely likely to be true.
It is this first thing held infinitely likely to be true that I am suspicious of.
What are the chances I have misread a random sentence? Higher than zero, in my experience. How then can I legitimately be infinitely convinced by sentences?
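To put that worry in symbols (a minimal sketch, assuming the steps are independent, with ε an assumed rather than measured misreading rate and pᵢ the probability that the i-th evidential link holds):

```latex
% Minimal sketch: a belief reached by reading a sentence and then
% following n evidential links.  \varepsilon > 0 is an assumed
% misreading rate; p_i is the probability that link i holds.
P(\text{belief correct})
  \;\le\; (1 - \varepsilon)\prod_{i=1}^{n} p_i \;<\; 1
  \qquad \text{whenever } \varepsilon > 0.
```

However small ε is, nothing multiplies the product back up to exactly 1.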
I think a better generalization is “Any intelligent being capable of recursive thought will accept the truth of a provable statement or be internally inconsistent.” But that formulation does have a “sufficiently intelligent” problem.
Consider some intelligent but non-mathematical subsection of society (e.g., lawyers). There are mathematical statements that are provable that some lawyer has not been exposed to, and so doesn’t think are true (or false). Further, there are likely to be provable statements that the lawyer has been exposed to but lacks the training (or intelligence?) to decide whether they are true.
I want to say that is a fact about the lawyer, or society, or bounded rationality. But it isn’t a very good response.
What are the chances I have misread a random sentence?
Errors are errors. And we are fallible creatures. If you don’t correct, then you are inconsistent without meaning to be so. If you do correct, then the fact of the error doesn’t tell you about the statement under investigation. And if you’d like to estimate the proportion of the time you make errors, that’s likely to be helpful in your decision-making, but it doesn’t convert non-empirical statements into empirical statements.
And a priori doesn’t mean true. There are lots of a priori false statements (e.g., “1 = 0” is not empirical, and it is false).
There are mathematical statements that are provable that some lawyer has not been exposed to, and so doesn’t think are true (or false).
Doesn’t strongly believe are true, or false, or other. But the mind is not a void before the training, and after getting a degree in math it still won’t be a void; but neither will it be a computer immune to gamma rays and quantum effects, working in PA with a proof that PA is consistent that uses only PA (a proof which, by Gödel’s second incompleteness theorem, cannot exist if PA is consistent). It will be a fallible thing with good reason to believe it has correctly read and parsed the definitions, etc.
If you do correct
We’re talking about errors I commit without having detected them. Are you discussing the case where I attempt to believe something false and accidentally believe something true? Or something similar?
And if you’d like to estimate the proportion of the time you make errors, that’s likely to be helpful in your decision-making, but it doesn’t convert non-empirical statements into empirical statements.
Unfortunately, I have good reason to believe I imperfectly sort statements along the empirical/non-empirical divide.
Unfortunately, I have good reason to believe I imperfectly sort statements along the empirical/non-empirical divide.
“1 + 2 = 3” is a statement that lacks empirical content. “F = ma” is a statement that has empirical content and is falsifiable. “The way to maximize human flourishing is to build a friendly AI that implements CEV(everyone)” is a statement with empirical content that is not falsifiable.
Folk philosophers do a terrible job distinguishing between the categories “lacks empirical content” and “is not falsifiable.” Does that prove the categories are identical?
We’re talking about errors I commit without having detected them. Are you discussing the case where I attempt to believe something false and accidentally believe something true? Or something similar?
I’m sorry, I don’t understand the question.
neither will it be a computer immune to gamma rays and quantum effects
Yes, there are ways to become delusion [delusional—oops]. It is worthwhile to estimate the likelihood of this possibility, but that isn’t what I’m trying to do here.
“1 + 2 = 3” is a statement that lacks empirical content. “F = ma” is a statement that has empirical content and is falsifiable. “The way to maximize human flourishing is to build a friendly AI that implements CEV(everyone)” is a statement with empirical content that is not falsifiable.
If you’re trying to demonstrate perfect ability to sort all statements into three bins, you have a lot more typing to do. If not, I don’t understand your point. Either you’re perfect at sorting such statements, or not. If not, there is a limit to how sure you should be that you correctly sorted each.
If you do correct [errors that you made but have not identified—Ed.], then the fact of the error doesn’t tell you about the statement under investigation.
I don’t know what this means.
there are ways to become delusion
?
It is worthwhile to estimate the likelihood of this possibility, but that isn’t what I’m trying to do here.
For each statement I believe to be true, I should estimate the chance of its being true as less than 1.
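As a toy sketch of that bookkeeping (the per-step error rate below is an invented number, purely for illustration, not anyone’s measured figure):

```python
# Toy model: a conclusion reached via n fallible steps, each performed
# correctly with probability (1 - eps).  eps = 1e-4 is invented purely
# for illustration.

def chained_confidence(n_steps: int, eps: float = 1e-4) -> float:
    """Upper bound on confidence after n independent fallible steps."""
    return (1.0 - eps) ** n_steps

if __name__ == "__main__":
    for n in (1, 100, 10_000):
        print(f"{n:>6} steps -> confidence <= {chained_confidence(n):.4f}")
    # Even a tiny per-step error rate keeps total confidence strictly
    # below 1, and long chains of reasoning erode it further.
```

The particular numbers don’t matter; the point is that the bound never reaches 1.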
If you’re trying to demonstrate perfect ability to sort all statements into three bins, you have a lot more typing to do. If not, I don’t understand your point. Either you’re perfect at sorting such statements, or not. If not, there is a limit to how sure you should be that you correctly sorted each.
It is interesting that all statements we would like to be able to assign a truth value to can be sorted into one of these three bins. Additional bins are not necessary, and fewer bins would be insufficient.