Agreed. But the observed slowing down (since, say, a century ago) in the rate of the paradigm shifts that are sometimes caused by things like discovering a new particle does suggest that our current ontology is now a moderately good fit to a fairly large slice of the world. And, I would claim, it is particularly likely to be a fairly good fit for the problem of pointing to human values.
We also don’t require that our ontology fits the AI’s ontology, only that when we point to something in our ontology, it knows what we mean — something that basically happens by construction in an LLM, since the entire purpose for which its ontology/world-model was learned was figuring out what we mean and what we may say next. We may have trouble interpreting its internals, but it’s a trained expert in interpreting our natural languages.
It is of course possible that our ontology still contains invalid concepts comparable to “do animals have souls?” My claim is just that this is less likely now than it was in the 18th century, because we’ve made quite a lot of progress in understanding the world since then. Also, if it did, an LLM would still know all about the invalid concept and our beliefs about it, just like it knows all about our beliefs about things like vampires, unicorns, or superheroes.
How can you tell? Again, you only have a predictive model. There is no way of measuring ontological fit directly.
Directly, no. But the process of science (like any use of Bayesian reasoning) is intended to gradually make our ontology a better fit to more of reality. If that were working as intended, then we would expect it to come to require more and more effort to produce the evidence needed to cause a significant further paradigm shift across a significant area of science, because there are fewer and fewer major large-scale misconceptions left to fix. Over the last century, we have had more and more people working as scientists, publishing more and more papers, yet the rate of significant paradigm shifts that have an effect across a significant area of science has been dropping. From which I deduce that our ontology is probably a significantly better fit to reality now than it was a century ago, let alone three centuries ago back in the 18th century, as this post discusses. Certainly the size and detail of our scientific ontology have both increased dramatically.
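One rough way to see why this should happen, under the simplifying assumption that an incumbent paradigm and a challenger can be treated as competing hypotheses $H_{\text{old}}$ and $H_{\text{new}}$ (symbols mine, not from the comment above), is Bayes’ theorem in odds form:

$$\frac{P(H_{\text{new}} \mid E)}{P(H_{\text{old}} \mid E)} \;=\; \frac{P(E \mid H_{\text{new}})}{P(E \mid H_{\text{old}})} \times \frac{P(H_{\text{new}})}{P(H_{\text{old}})}$$

On this toy picture, if accumulated evidence has driven the prior odds in favor of the incumbent up to something like $10^k : 1$, then a paradigm shift requires new evidence $E$ with a likelihood ratio better than $10^k$ in the challenger’s favor; the better the current ontology already fits the evidence, the more effort each further shift costs.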
Is this proof? No, as you correctly observe, proof would require knowing the truth about reality. It’s merely suggestive supporting evidence. It’s possible to contrive other explanations: perhaps, for some reason related to social or educational changes, all of those people working in science now are much stupider, more hidebound, or less original thinkers than the scientists of a century ago, and that’s why dramatic paradigm shifts are slower — but personally I think this is very unlikely.
It is also quite possible that this is more true in certain areas of science that are amenable to the mental capabilities and research methods of human researchers, and that there are other areas which have resisted these approaches (so that our lack of progress there is caused by inability, not by our approaching the goal), but where the different capabilities of an AI might allow it to make rapid progress. In such an area, the AI’s ontology might well be a significantly better fit to reality than ours.
Yes, it is intended to. Whether, and how, it works are other questions.
There’s also nothing about Bayesianism that guarantees incrementally better ontological fit, in addition to incrementally improving predictive power.
Bayes’ theorem is about the truth of propositions. Why couldn’t it be applied to propositions about ontology?
It’s about one of the things “truth” means. If you want to apply it to ontology, you need a kind of evidence that’s relevant to ontology—that can distinguish hypotheses that make similar predictions.
Correct me if I’m wrong, but I think we could apply the concept of logical uncertainty to metaphysics and then use Bayes’ theorem to update depending on where our metaphysical research takes us, the way we can use it to update the probability of logically necessarily true/false statements.
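For concreteness, a minimal sketch of that kind of update, assuming we can assign a credence to a metaphysical claim $M$ and treat a new argument or research result $A$ as evidence (my notation, not the commenter’s):

$$P(M \mid A) \;=\; \frac{P(A \mid M)\,P(M)}{P(A \mid M)\,P(M) + P(A \mid \neg M)\,P(\neg M)}$$

The machinery itself is unobjectionable; the contested step, as the previous reply notes, is finding evidence $A$ for which $P(A \mid M) \neq P(A \mid \neg M)$ when the rival hypotheses make the same empirical predictions.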
How do we use Bayes to find kinds of truth other than predictiveness?
If you are dubious that the methods of rationality work, I fear you are on the wrong website.
I’m not saying they don’t work at all. I have no problem with prediction.
I notice that you didn’t tell me how the methods of rationality work in this particular case. Did you notice that I conceded that they work in others?
If this website is about believing things that cannot be proven, and have never been explained, then it is “rationalist” not rationalist.