Do you not want to strongly believe in something without a strong and non-cyclic conceptual justification for it?
It’s not that I don’t want to strongly believe in something without a strong and non-cyclic conceptual justification for it. It’s that I want my actions to help reduce existential risk, and to do that I rely on reasoning, so it matters to me that I use the kind of reasoning that actually helps reduce existential risk. That’s why I’m interested in which aspects of my reasoning are trustworthy and which are not.
Now you have linked to many compelling impossibility arguments. Hume’s is-ought gap, the problem of induction, and many of Eliezer’s writings rule out whole regions of the space of possible resolutions to this problem, just as the relativization barrier in computational complexity theory rules out whole regions of the space of possible resolutions to the P versus NP problem. So, good, let’s not look in the places that we can definitively rule out (and I do agree that the arguments you have linked to in fact soundly rule out their respective regions of the resolution space).
Given all that, how do you determine whether your reasoning is trustworthy?
If you ask me whether my reasoning is trustworthy, I guess I’ll look at how I’m thinking at a meta-level and see whether there are logical justifications for that category of thinking, plus look at examples of my thinking in the past and see how often I was right. So roughly your “empirical” and “logical” foundations.
And I sometimes use my reasoning to bootstrap myself to better reasoning. For example, I didn’t use to be Bayesian; I did not intuitively view my beliefs as having probabilities associated with them. Then I read Rationality, and was convinced by both theoretical arguments and practical examples that being Bayesian was a better way of thinking, and now that’s how I think. I had to evaluate the arguments in favor of Bayesianism in terms of my previous means of reasoning, which was overall more haphazard, but fortunately good enough to recognize the upgrade.
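(To make “viewing beliefs as having probabilities” concrete, here’s a minimal sketch of a single Bayesian update; the hypothesis and all the numbers are made up purely for illustration.)

```python
# Minimal sketch of one Bayesian update: revise a credence after seeing evidence.
# The hypothesis and all numbers below are hypothetical, for illustration only.

prior = 0.30            # P(H): initial credence that some hypothesis H is true
p_e_given_h = 0.80      # P(E | H): probability of the observed evidence if H is true
p_e_given_not_h = 0.20  # P(E | not H): probability of that evidence if H is false

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(f"prior = {prior:.2f}, posterior = {posterior:.2f}")  # 0.30 -> ~0.63
```

The point isn’t the arithmetic; it’s that treating a belief as a number forces you to say how much a given piece of evidence should move it.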
From the phrasing you used, it sounded to me like you were searching for some Ultimate Justification that could by definition only be found in regions of the space that have been ruled out by impossibility arguments. But it sounds like you’re well aware of those reasons, and must be looking elsewhere; sorry for misunderstanding.
But honestly I still don’t know what you mean by “trustworthy”. What is the concern, specifically? Is it:
That there are flaws in the way we think, for example the Wikipedia list of biases?
That there’s an influential bias that we haven’t recognized?
That there’s something fundamentally wrong with the way that we reason, such that most of our conclusions are wrong and we can’t even recognize it?
That our reasoning is fine, but we lack a good justification for it?
Something else?
If you are going to make very confident claims, you need a very strong basis. That’s one sense in which you need trustworthiness. But if you are not going to make very confident claims, you needn’t worry.
If you are going to promote a narrow epistemology based on, for instance, just science or just Bayes, then you need a justification for it that doesn’t also justify everything you want to exclude from that narrow epistemology. Circular justification would justify anything that’s self-consistent, so it’s not good enough.
If you’re not doing either of the above, then you can just embrace a liberal, pluralistic approach and not worry.