Thanks for that explanation of mental stuff. My opinion? Sounds implausible, but fine, in the sense that we shouldn’t build our AI in a way that makes it incapable of considering that hypothesis. As an aside, I think it is less plausible than idealism, because it lacks the main cluster of motivations for idealism. The whole point of idealism is to be monist (and thus achieve ontological parsimony) whilst also “taking consciousness seriously.” As seriously as possible, in fact. Perhaps more seriously than is necessary, but anyhow that’s the appeal. Morality fluid takes morals seriously (maybe? Maybe not, actually, given your construction) but it doesn’t take consciousness any more seriously than physicalism, it seems. And, I think, it is more important that our theories take consciousness seriously than that they take morality seriously.
I suspect ‘a general open-mindedness towards considering different ontologies’ can’t be formalized, or can’t be both formalized and humanly vetted.
Humans do it. If intelligent humans can consider a hypothesis, an AI should be able to as well. In most cases it will quickly realize the hypothesis is silly or even self-contradictory, but at least it should be able to give it an honest try, rather than classify it as nonsense from the beginning.
At a minimum, we’ll need to decide what gets to count as an ‘ontology’, which means drawing the line somewhere and declaring everything outside a certain set of boundaries nonsensical.
Doesn’t seem too difficult to me. It isn’t really an ontology/non-ontology distinction we are looking for, but a “hypothesis about the lowest level of description of the world / not that” distinction. Since the hypothesis itself states whether or not it is about the lowest level of description of the world, really all this comes down to is the distinction between a hypothesis and something other than a hypothesis. Right?
My general idea is, we don’t want to make our AI more limited than ourselves. In fact, we probably want our AI to reason “as we wish we ourselves would reason.” You don’t wish you were incapable of considering idealism, do you? If you do, why?