What about “I think therefore I am”? Isn’t it a universally compelling argument?
Not even among the tiny, tiny section of mind-space occupied by human minds.
Notice also that “I think therefore I am” is an is-statement (not an ought-statement / something a physical system optimizes towards).
As for me personally, I don’t disagree that I exist, but I see it as a fairly vague/ill-defined statement. And it’s not a logical necessity, even if we grant assumptions that most humans would share. Another logical possibility would be Boltzmann brains (unless a Boltzmann brain would qualify as “I”, I guess).
I argue that “no universally compelling arguments” is misleading.
You haven’t done that very much. Only, insofar as I can remember, through anthropomorphization and through reference to metaphysical ought-assumptions that are not shared by all/most possible minds (sometimes not even by the minds you are interacting with, despite those minds being capable of developing advanced technology).
What information would change your opinion?
About universally compelling arguments?
First, a disclaimer: I do think there are “beliefs” that most intelligent/capable minds will have in practice. E.g., I suspect most will use something like modus ponens, and most will update beliefs in accordance with statistical evidence in certain ways. I think it’s possible for a mind to be intelligent/capable without strictly adhering to those things, but I do think there will be a correlation in practice for many such “beliefs”.
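To gesture at what I mean by updating beliefs in accordance with statistical evidence, here is a toy Python sketch of one such way (a simple Bayesian update). It is purely illustrative; the function and the numbers are made up for this example, not a claim about how any particular mind is implemented.

```python
# Toy sketch of one way to "update beliefs in accordance with statistical
# evidence": a simple Bayesian update. Purely illustrative numbers.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) from a prior and two likelihoods."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1.0 - prior))

# Evidence that is 3x more likely if the hypothesis is true should (on this
# way of updating) raise confidence in the hypothesis:
print(bayes_update(prior=0.5, p_evidence_if_true=0.9, p_evidence_if_false=0.3))  # -> 0.75
```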
Questions I ask myself are:
Would it be impossible (in theory) to wire together a mind/program with “belief”/behavior x and have that mind be very capable at most mental tasks?
Would it be infeasible (for humans) to wire together a mind/program with “belief”/behavior x and have that mind be very capable at most mental tasks?
And in the case of e.g. caring about “goals”, I don’t see good reasons to think that the answer to either of those questions is “yes”.
Like, I think it is physically and practically possible to make minds that act in ways that I would consider “completely stupid”, while still being extremely capable at most mental tasks.
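To make that a bit more concrete, here is a toy Python sketch of what I have in mind: the same general-purpose search routine can be pointed at a “sensible” objective or at a “stupid” one, and its capability does not depend on which objective it gets. Everything here (the function names, the objectives, the example plans) is made up purely for illustration.

```python
# Toy sketch: the same general-purpose search routine can be pointed at any
# objective, "sensible" or "stupid". Capability and objective are separate
# parameters here. All names and objectives are invented for illustration.
from typing import Callable, Iterable, TypeVar

Option = TypeVar("Option")

def best_option(options: Iterable[Option], objective: Callable[[Option], float]) -> Option:
    """Stand-in for some very capable planning/search procedure."""
    return max(options, key=objective)

def sensible_objective(plan: str) -> float:
    return float(len(plan))          # e.g. prefer more detailed plans

def stupid_objective(plan: str) -> float:
    return float(plan.count("9"))    # e.g. maximize occurrences of the digit 9

plans = ["short plan", "a much longer and more detailed plan", "plan 999: write 9s"]
print(best_option(plans, sensible_objective))  # picks the detailed plan
print(best_option(plans, stupid_objective))    # picks the 9-filled plan
```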
Another thing I sometimes ask myself:
1. “Is it possible for an intelligent program to surmise what another intelligent mind would do if it had goal/preferences/optimization-target x?”
2. “Would it be possible for another program to ask about #1 as a question, or fetch that info from the internals of another program?”
If yes and yes, then a program could be written that fetches (per #2) what such a mind would do (per #1, given goal/preferences/optimization-target x) and carries out that thing.
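As a toy sketch of that construction (all function names and logic here are hypothetical placeholders of my own, just to show the shape of it):

```python
# Toy sketch of the construction above: if #1 is possible (a program can
# surmise what a mind with goal x would do) and #2 is possible (another
# program can fetch that answer), the two can be glued into a program that
# simply does that thing. All functions are hypothetical placeholders.

def surmise_action(world_state: dict, goal: str) -> str:
    """#1: predict what an intelligent mind pursuing `goal` would do.
    (Placeholder logic standing in for a much more capable predictor.)"""
    return f"the action that best advances {goal!r} given {world_state}"

def carry_out(action: str) -> None:
    """Placeholder for actually executing an action in the world."""
    print(f"executing: {action}")

def composed_program(world_state: dict, goal: str) -> None:
    """#2 glued to #1: fetch what such a mind would do, then carry it out."""
    carry_out(surmise_action(world_state, goal))

composed_program({"location": "lab"}, goal="optimization-target x")
```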
I could imagine information that would make me doubt my opinion / feel confused, but nothing that is easy to summarize. (I would have to be wrong about several things—not just one.)