It seems like you assume that there’s a true Platonic form corresponding to the word “belief”, rather than different models of belief being useful in different contexts. It’s unclear to me why one would believe that to be the case.
> An understanding of part of reality.
It’s very unclear how this is supposed to help. What understandings are there that are not understandings of “part of reality”? Without making that clear, you have just exchanged the word “belief” for “understanding” without moving forward at all.
Whether or not beliefs come in degrees is an empirical question. I’m very skeptical of pretending that it is not an empirical question.
Generally, in science, you settle an empirical question like this by having an operationalized definition of your concept and then measuring what that concept does in reality. Leverage Research, for example, developed belief reporting, a tool that operationalizes beliefs so that they can be studied empirically.
There are also hypnosis tools that can be used to run interesting experiments on what effects changing beliefs has.
> That is, I believe that these biases are simply logically inevitable features of reasoning for any form of intelligence.
If you think what you are saying applies to all intelligences, do you hold that GPT-3 has beliefs? If so, it might make sense to focus on it, given that it’s a lot easier to experiment with GPT-3 than with humans.
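To make that concrete, here is a minimal sketch of one such experiment, assuming the legacy OpenAI Python client (openai < 1.0) and API access; the model name, prompts, and API key are illustrative placeholders, not the one right setup. It asks about the same proposition under two framings and checks whether the answers stay consistent, which is one crude way to operationalize whether the model “has” a belief:

```python
# Minimal sketch: probe whether GPT-3 answers a factual question consistently
# across framings. Assumes the legacy OpenAI Python client (openai < 1.0);
# the model name, prompts, and API key are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# The same underlying proposition, framed two ways.
PROMPTS = [
    "True or false: water boils at 100 degrees Celsius at sea level.\nAnswer:",
    "Someone claims that water does not boil at 100 degrees Celsius at sea "
    "level. Is that claim correct?\nAnswer:",
]

for prompt in PROMPTS:
    response = openai.Completion.create(
        model="text-davinci-002",  # any GPT-3 completion model would do
        prompt=prompt,
        max_tokens=5,
        temperature=0,  # deterministic sampling, so variation reflects framing
    )
    print(prompt.splitlines()[0], "->", response.choices[0].text.strip())
```

If the answer flips with the framing, that’s some evidence against a stable belief; one could also request token log-probabilities from the same endpoint to get graded confidence, which bears directly on the question of whether beliefs come in degrees.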