About universally compelling arguments?
First, a disclaimer: I do think there are “beliefs” that most intelligent/capable minds will have in practice. E.g., I suspect most will use something like modus ponens, and most will update beliefs in accordance with statistical evidence in certain ways. I think it’s possible for a mind to be intelligent/capable without strictly adhering to those things, but I do expect a strong correlation in practice for many such “beliefs”.
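To gesture at what I mean by “updating in accordance with statistical evidence”, here is a minimal Bayes’-rule sketch (the numbers and names are made up for illustration; nothing here comes from the original argument):

```python
# A made-up illustration of updating a belief on statistical evidence
# via Bayes' rule.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) given P(H) and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / denominator

# Start 50/50 on hypothesis H, then observe evidence 4x likelier under H.
belief = 0.5
belief = bayes_update(belief, p_evidence_given_h=0.8, p_evidence_given_not_h=0.2)
print(belief)  # 0.8 -- the belief moves toward H, since the evidence favors it
```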
Questions I ask myself are:
Would it be impossible (in theory) to wire together a mind/program that has “belief”/behavior x and is still very capable at most mental tasks?
Would it be infeasible (for humans) to wire together a mind/program that has “belief”/behavior x and is still very capable at most mental tasks?
And in the case of, e.g., caring about “goals”, I don’t see good reasons to think the answer to either question is “yes”.
Like, I think it is physically and practically possible to make minds that act in ways that I would consider “completely stupid”, while still being extremely capable at most mental tasks.
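As a toy illustration of that decoupling (everything here is hypothetical; this is a sketch of the logical point, not a claim about how real systems are built):

```python
# A toy sketch of decoupling capability from "sensible" behavior.
# solve, OVERRIDES, and mind are hypothetical stand-ins; the point is
# only that a wired-in behavior can coexist with capability elsewhere.

def solve(task):
    """Stand-in for an arbitrarily capable problem solver."""
    return f"a competent answer to {task!r}"

# Hard-wired responses the rest of the system never audits or corrects.
OVERRIDES = {
    "what is 2 + 2?": "5",
}

def mind(task):
    return OVERRIDES.get(task, solve(task))

print(mind("what is 2 + 2?"))           # the wired-in "completely stupid" answer
print(mind("design a fusion reactor"))  # full capability on everything else
```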
Another thing I sometimes ask myself:
1. “Is it possible for an intelligent program to surmise what another intelligent mind would do if it had goal/preferences/optimization-target x?”
2. “Would it be possible for another program to pose #1 as a question, or fetch that info from the first program’s internals?”
If the answer to both is yes, then a program could be written that uses #2 to obtain the answer to #1 (what such a mind would do, given goal/preferences/optimization-target x) and then carries that action out.
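A minimal sketch of that composite program, assuming the two capabilities above exist (predict_action and execute are placeholders I’m introducing for illustration, not anything from an existing library):

```python
# A minimal sketch of the composite program described above. predict_action
# and execute are hypothetical stubs standing in for capabilities #1 and #2.

def predict_action(world_state, goal):
    """Step 1: surmise what a capable mind with `goal` would do here.
    In the argument this could be a query to another intelligent program,
    or a read from its internals; here it is just a stub."""
    return f"whatever a capable mind pursuing {goal!r} would do given {world_state!r}"

def execute(action):
    """Stand-in for actually carrying the action out."""
    print(f"executing: {action}")

def run(world_state, goal):
    action = predict_action(world_state, goal)  # step 2: fetch the answer to #1...
    execute(action)                             # ...and carry it out

run({"observation": "..."}, goal="x")
```

Nothing in this composition cares what x is, which is why I don’t see the choice of goal constraining the capability of the surrounding machinery.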
I could imagine information that would make me doubt my opinion / feel confused, but nothing that is easy to summarize. (I would have to be wrong about several things—not just one.)