Have you read the Sequences, or Sean Carroll’s ‘The Big Picture’? Both talk about these questions. For example:
We can appeal to empiricism to provide a foundation for logic, or we can appeal to logic to provide a foundation for empiricism, or we can connect the two in an infinitely recursive cycle.
See explain-worship-ignore, and more generally mysterious-answers. See also no-universally-compelling-arguments-in-math-or-science, and more generally mind-space.
simpler hypotheses are more likely to be true than complicated hypotheses
I’m not sure if this appeared in the Sequences or not, but there’s a purely logical argument that simpler hypotheses must be more likely. For any level of complexity, there are finitely many hypotheses that are simpler than that, and infinitely many that are more complex. You can use this to prove that any probability distribution must be biased towards simpler hypotheses.
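A minimal sketch of that argument (my own phrasing, assuming only that the hypotheses can be enumerated in order of non-decreasing complexity): list them as $h_1, h_2, h_3, \dots$ and let $p$ be any probability distribution over them, so $\sum_{i=1}^{\infty} p(h_i) = 1$. For any $\varepsilon > 0$, at most $1/\varepsilon$ hypotheses can have $p(h_i) \ge \varepsilon$, otherwise the probabilities would sum to more than 1. So all but finitely many hypotheses sit below any threshold you pick, and $p(h_i) \to 0$ as $i \to \infty$: past some level of complexity, every hypothesis must get very low probability. No coherent distribution can treat simple and complex hypotheses even-handedly.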
We need not doubt all of mathematics, but we might do well to question what it is that we are trusting when we do not doubt all of mathematics.
“All of mathematics” might not be as coherent as you think. There’s debate around the foundations.
For example:
Should the foundation be set theory (ZF axioms), or constructive type theory?
Axiom of Choice: true or false?
Law of excluded middle: true or false?
(I’m not a mathematician, so take this with a grain of salt.)
There are two very different notions of what it means for some math to be “true”. One is that the statement in question follows from the axioms you’re assuming. The other is that you’re using this piece of math to model the real world, and the corresponding statement about the real world is true. For example, “2 + 2 = 4” can be proved using the Peano axioms, with no regard to the world at all. But there are also (multiple!) real situations that “2 + 2 = 4” models. One is that if you put two cups together with two other cups, you’ll have four cups. Another is that if you pour two gallons of gas into a car that already has two gallons of gas, the car will have four gallons. In this second model, it’s also true that “1/2 + 1/2 = 1”. In the first model, it isn’t: the correspondence breaks down because no one wants a shattered cup.
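To make the first sense concrete, here is a sketch of the formal derivation (using the standard recursive definition of addition in Peano arithmetic, $a + 0 = a$ and $a + S(b) = S(a + b)$, with $1 = S(0)$, $2 = S(1)$, $3 = S(2)$, $4 = S(3)$):

$$2 + 2 = 2 + S(1) = S(2 + 1) = S(2 + S(0)) = S(S(2 + 0)) = S(S(2)) = S(3) = 4$$

Nothing about cups or gallons enters anywhere; it is pure symbol manipulation licensed by the axioms.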
I’m actually very interested to see what assumptions about the real world correspond to mathematical axioms. For example, if you interpret mathematical statements to be “objectively” true then the law of the excluded middle is true, but if you interpret them to be about knowledge or about provability, then the law of the excluded middle is false. I have no idea what the axiom of choice is about, though.
See also the-simple-truth.
I am asking you to doubt that your reason for correctly (in my estimation) not doubting ethics can be found within ethics.
Have you read about Hume’s is-ought distinction? He writes about it in ‘A Treatise of Human Nature’. It says that ought-statements cannot be derived from is-statements alone. You can derive an is-statement from another, for example by using modus ponens. And you can derive one ought-statement from another ought-statement, plus some is-statement reasoning, for example “you shouldn’t punch him because that would hurt him, and someone being hurt is bad”. But you can’t go from pure is-statements to an ought-statement. Yudkowsky says similar things. Once you instinctively see this distinction, it’s not even tempting to look for an ultimate justification of ethics within logic or empiricism, because it’s obviously not there.
The problem, I suspect, is that these questions of deep doubt in fact play within our minds all the time, and hinder our capacity to get on with our work.
It’s always dangerous to put thoughts in other people’s minds! These questions really truly do not play within my mind. I find them interesting, but doubt they’re of much practical importance, and they do not bother me. I’m sure I’m not alone.
It seems like you are unhappy without having “a satisfying conceptual answer to ‘why should I believe it?’ within the systems that we are questioning.” Why is that? Do you not want to strongly believe in something without a strong and non-cyclic conceptual justification for it?
Do you not want to strongly believe in something without a strong and non-cyclic conceptual justification for it?
It’s not that I don’t want to strongly believe in something without a strong and non-cyclic conceptual justification for it. It’s that I want my actions to help reduce existential risk, and in order to do that I use reasoning. So it’s important to me that I use the kind of reasoning that actually helps me reduce existential risk, which is why I’m interested in which aspects of my reasoning are trustworthy and which are not.
Now you have linked to many compelling impossibility arguments. Hume’s is-ought gap, the problem of induction, and many of Eliezer’s writings rule out whole regions of the space of possible resolutions to this problem, just as the relativization barrier in computational complexity theory rules out whole regions of the space of possible resolutions to the P versus NP problem. So, good, let’s not look in the places that we can definitively rule out (and I do agree that the arguments you have linked to in fact soundly rule out their respective regions of the resolution space).
Given all that, how do you determine whether your reasoning is trustworthy?
If you ask me whether my reasoning is trustworthy, I guess I’ll look at how I’m thinking at a meta-level and see if there are logical justifications for that category of thinking, plus look at examples of my thinking in the past, and see how often I was right. So roughly your “empirical” and “logical” foundations.
And I sometimes use my reasoning to bootstrap myself to better reasoning. For example, I didn’t use to be Bayesian; I did not intuitively view my beliefs as having probabilities associated with them. Then I read Rationality, and was convinced by both theoretical arguments and practical examples that being Bayesian was a better way of thinking, and now that’s how I think. I had to evaluate the arguments in favor of Bayesianism in terms of my previous means of reasoning—which was overall more haphazard, but fortunately good enough to recognize the upgrade.
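Concretely, the habit amounts to applying Bayes’ rule to beliefs (the numbers below are made up purely for illustration): if my prior in a hypothesis $H$ is $P(H) = 0.2$, and I see evidence $E$ that is 90% likely if $H$ is true but only 30% likely otherwise, then

$$P(H \mid E) = \frac{0.9 \times 0.2}{0.9 \times 0.2 + 0.3 \times 0.8} = \frac{0.18}{0.42} \approx 0.43,$$

so the evidence roughly doubles my credence rather than simply “confirming” or “refuting” the hypothesis.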
From the phrasing you used, it sounded to me like you were searching for some Ultimate Justification that could by definition only be found in regions of the space that have been ruled out by impossibility arguments. But it sounds like you’re well aware of those reasons, and must be looking elsewhere; sorry for misunderstanding.
But honestly I still don’t know what you mean by “trustworthy”. What is the concern, specifically? Is it:
That there are flaws in the way we think, for example the Wikipedia list of biases?
That there’s an influential bias that we haven’t recognized?
That there’s something fundamentally wrong with the way that we reason, such that most of our conclusions are wrong and we can’t even recognize it?
That our reasoning is fine, but we lack a good justification for it?
Something else?
If you are going to make very confident claims, you need a very strong basis. That’s one sense in which you need trustworthiness. But if you are not going to make very confident claims, you needn’t worry.
If you are going to promote a narrow epistemology based on, for instance, just science or just Bayes, then you need a justification for it that doesn’t also justify everything you want to exclude from your narrow epistemology. Circular justification would justify anything that’s self-consistent, so it’s not good enough.
If you’re not doing either of the above, then you can just embrace a liberal, pluralistic approach, and not worry.
I’m not sure if this appeared in the Sequences or not, but there’s a purely logical argument that simpler hypotheses must be more likely. For any level of complexity, there are finitely many hypotheses that are simpler than that, and infinitely many that are more complex. You can use this to prove that any probability distribution must be biased towards simpler hypotheses
Yes, but that doesn’t tell you that:
you have a unique way of picking out the simplest hypothesis. The standard intuition is that there is a single truth, but there are multiple ways of defining simplicity (see the note after this list).
you are picking it out of the total hypothesis space, i.e. that the hypotheses you are considering add up to one in an absolute sense. Solomonoff induction is limited to computable universes, for instance.
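To illustrate the first point: the usual formalisation of simplicity, Kolmogorov complexity, is only pinned down up to the choice of reference universal machine. The invariance theorem guarantees only that for any two universal machines $U$ and $V$ there is a constant $c_{U,V}$ with

$$|K_U(x) - K_V(x)| \le c_{U,V} \quad \text{for all } x,$$

so two different reference machines can rank the same pair of hypotheses in opposite orders of simplicity, and nothing machine-independent singles out “the” simplest hypothesis.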