Would an AI believe itself to have free will? Without free will, it is, imo, difficult to accept that moral agents exist as currently conceived. (This is my contention.) It might, of course, construct the idea of a moral agent a bit differently, or agree with those who see free will as irrelevant to the idea of moral agents. It is also possible that it might see itself as a moral agent but not see humans as such (rather as we do with animals). It might still see us as worthy of moral consideration, however.
Reconciling free will with physics is a basic part of the decision theory problem. See MIRI's work on the topic and my own theoretical write-up.
Interesting. I have not looked at things like this before. I am not sure that I am smart enough or knowledgeable enough to understand the MIRI stuff or your own paper, at least not on a first reading.