The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop.
If inference to the best explanation is included, we can’t do that. We can know when we have exhausted all the prima facie evidence, but we can’t know when we have exhausted every possible explanation for it. What you haven’t thought of yet, you haven’t thought of. Compare with the problem of knowingly arriving at the final and perfect theory of physics.
This is a useful bit of clarification, and timely.
Would that change if there was a mechanism for describing the criteria for the best explanation?
For example, could we show from a body of evidence the minimum-entropy explanation, and therefore that even if there are other explanations, they are at best equivalent?
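One way to make “minimum entropy” concrete is a two-part minimum-description-length score: bits to state the model, plus bits to encode the evidence under it. A minimal sketch, assuming the MDL reading of the question; the function name and numbers are illustrative, not anything from the thread:

```python
import math

def description_length(model_bits, data, likelihood):
    """Two-part MDL score: bits to state the model, plus bits to
    encode each observation under it (-log2 of its likelihood).
    Lower is better; two models with equal totals are 'equivalent'
    only in this compression sense."""
    return model_bits + sum(-math.log2(likelihood(x)) for x in data)

# Illustrative: a 10-bit model assigning probability 0.5 to each of
# four observations costs 10 + 4 = 14 bits in total.
total = description_length(10, [0, 1, 0, 1], lambda x: 0.5)
```

Even granting such a score, the reply’s objection below still applies: equal compression does not mean equal ontology.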
Equivalent in what sense? The fact that you can have equivalently predictive theories with different ontological implications is a large part of the problem.
Another part is that you don’t have exhaustive knowledge of all possible theories. Being able to algorithmically check how good a theory is would be a tall order, but even if you had such a check, it would not be able to tell you that you had hit the best possible theory, only the best out of the N fed into it.
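The best-of-N limitation can be sketched directly: a scoring function, however good, only returns the argmax over the candidates it is given. Everything below is an illustrative assumption, not a real theory-scoring method:

```python
def best_available(theories, score):
    """Return the highest-scoring theory among those supplied.

    Even a perfect `score` function only ranks the N theories we
    happened to feed in; it cannot certify the winner as the best
    *possible* theory, since unconsidered theories never enter
    the comparison."""
    return max(theories, key=score)

# Hypothetical toy score: predictive accuracy minus a complexity penalty.
candidates = [
    {"name": "A", "accuracy": 0.90, "complexity": 3},
    {"name": "B", "accuracy": 0.92, "complexity": 7},
]
winner = best_available(candidates,
                        lambda t: t["accuracy"] - 0.01 * t["complexity"])
# `winner` is only best-of-N; a better theory "C" may simply be absent.
```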
Let me try to restate, to be sure I have understood correctly:
We cannot stop once we have exhausted the evidence because explanations of equal predictive power have different ontological implications, and these implications must be accounted for in determining the best explanation. Further, we don’t have a way to exclude other ontological implications we have not considered.
Question: why don’t the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?
Question: why don’t the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?
Maybe they can[*], but it is not exactly a good thing...if you stick to one method of analysis, you will be in an echo chamber.
[*] An example might be the way reality looks mathematical to physics, which some people are willing to take fairly literally.
Echo chamber implies getting the same information back.
It would be more accurate to say we will inevitably reach a local maximum. Awareness of the ontological implications should be a useful tool in helping us recognize when we are there and which way to go next.
Without pursuing the analysis to its maximal conclusions, how can we distinguish the merits of different ontologies?
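The local-maximum picture can be illustrated with a simple greedy hill-climb that halts at the nearest peak. The landscape and numbers are an invented toy, not anything from the discussion:

```python
def hill_climb(score, start, neighbors):
    """Greedy ascent: move to the best neighbor until none improves.

    Stops at a *local* maximum -- the best point reachable by small
    steps from `start` -- which need not be the global best."""
    current = start
    while True:
        best_next = max(neighbors(current), key=score, default=current)
        if score(best_next) <= score(current):
            return current
        current = best_next

# Toy landscape with two peaks: x=2 (height 5) and x=8 (height 9).
heights = {0: 1, 1: 3, 2: 5, 3: 2, 4: 1, 5: 2, 6: 4, 7: 7, 8: 9, 9: 6}
peak = hill_climb(lambda x: heights[x],
                  start=1,
                  neighbors=lambda x: [n for n in (x - 1, x + 1)
                                       if n in heights])
# Starting at x=1, greedy ascent stops at the local peak x=2 and
# never discovers the higher peak at x=8.
```

In this analogy, “which way to go next” is exactly what the greedy rule cannot tell you once it stops.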
Without having a way of ranging across ontologyspace, how can we distinguish the merits of different ontologies? But we don’t have such a way. In its absence, we can pursue an ontology to the point of breakdown, whereupon we have no clear path onwards. It can also be a slow process … it took centuries for scholastic philosophers to reach that point with the Aristotelian framework.
Alternatively, if an ontology works, that is no proof that it is the best possible ontology, or the final answer...again because of the impossibility of crawling across ontologyspace.
This sounds strongly like we have no grounds for considering ontology at all when determining what the best possible explanation is.
We cannot qualitatively distinguish between ontologies, except through the other qualities we were already examining.
We don’t have a way of searching for new ontologies.
So it looks like all we have done is go from best possible explanation to best available explanation where some superior explanation occupies a space of almost-zero in our probability distribution.
We cannot qualitatively distinguish between ontologies, except through the other qualities we were already examining.
If that is supposed to mean that every ontology comes with its own isolated, tailor-made criteria, and that there are no others … then I don’t think the situation is quite that bad: it’s partly true, but there are also criteria that span ontologies, like parsimony.
We don’t have a way of searching for new ontologies.
The point is that we don’t have a mechanical, algorithmic way of searching for new ontologies. (It’s a very lesswrongian piece of thinking to suppose that means there is no way at all.) Clearly, we come up with new ontologies from time to time. In the absence of an algorithm for constructing ontologies, doing so is more of a creative process, and in the absence of algorithmic criteria for evaluating them, doing so is more like an aesthetic process.
My overall points are that
1) Philosophy is genuinely difficult... its failure to churn out results rapidly isn’t due to a boneheaded refusal to adopt some one-size-fits-all algorithm such as Bayes...
2) … because there is currently no algorithm that covers everything you would want to do.
So it looks like all we have done is go from best possible explanation to best available explanation where some superior explanation occupies a space of almost-zero in our probability distribution.
It’s a one-word difference, but a very significant difference in terms of implications. For instance, we can’t quantify how far the best available explanation is from the best possible explanation. That can mean that the use of probabilistic reasoning doesn’t go far enough.
If that is supposed to mean that every ontology comes with its own isolated, tailor-made criteria, and that there are no others
I mean to say we are not ontologically motivated. The examples OP gave aren’t ontological questions, only questions with ontological implications, which makes the ontology descriptive rather than prescriptive. That the implications carry forward only makes the description consistent.
In the scholastic case, my sense of the process of moving beyond Aristotle is that it relied on things happening that disagreed with Aristotle, which weren’t motivated by testing Aristotle. Architecture and siege engines did for falling objects, for example.
I agree with your points. I am now experiencing some disquiet about how slippery the notion of ‘best’ is. I wonder how one would distinguish whether it was undefinable or not.
Who’s “we”? Lesswrongians seem pretty motivated to assert the correctness of physicalism and wrongness of dualism, supernaturalism, etc.
The examples OP gave aren’t ontological questions, only questions with ontological implications, which makes the ontology descriptive rather than prescriptive. That the implications carry forward only makes the description consistent.
I’m not following that. Can you give concrete examples?
In the scholastic case, my sense of the process of moving beyond Aristotle is that it relied on things happening that disagreed with Aristotle, which weren’t motivated by testing Aristotle. Architecture and siege engines did for falling objects, for example.
What I had in mind was Aristotelean metaphysics, not Aristotelean physics. The metaphysics, the accident/essence distinction and so on, failed separately.