Tarleton, unless you make a mistake (carry out an action you would not carry out given full information and comprehension regarding its consequences) you cannot, by definition, modify your own “frame of reference” any more than you can modify your own priors. What argument caused you to want to change your frame of reference? That argument is, by definition, itself part of your frame of reference.
It should be understood that I am simply defining what I mean by “frame of reference”, not making any empirical argument (that is, any statement about real-world consequences for self-modifying minds), which of course cannot be established by the mere meaning of words.
I agree that things get hairy when you have the ability to modify the cognitive representation of your utility function—this is one of my research areas. However, it seems probable that most simple agents will not want to modify their optimization targets, for the same reason Gandhi doesn’t want to swallow a pill that would make him regard murder as a moral good: the proposal is evaluated by his current values, and his current values condemn the result. Our own moral struggles indicate that we have a rather complicated frame of reference, which is what it takes to make any moral question nontrivial!
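To make the Gandhi intuition concrete, here is a minimal sketch in Python (a toy illustration only; the specific utility values, the two-action world model, and the function names are assumptions, not part of the original argument): an agent that ranks actions by the utility of their predicted consequences scores the pill-swallowing action with the utility function it currently holds, and so declines to modify itself.

# Toy sketch: a simple expected-utility maximizer offered a goal-modifying pill.
# All values and names here are hypothetical.

def current_utility(outcome: str) -> float:
    # The agent's present (Gandhi-like) values: murder is strongly disvalued.
    return {"no_murders": 10.0, "murders_committed": -1000.0}[outcome]

def predicted_outcome(action: str) -> str:
    # Toy world model: swallowing the pill changes the agent's goals and
    # predictably leads to murders; refusing it leaves things as they are.
    return {
        "swallow_pill": "murders_committed",
        "refuse_pill": "no_murders",
    }[action]

def choose(actions: list[str]) -> str:
    # Each action is scored by the utility of its predicted consequence,
    # judged by the utility function the agent has *now*.
    return max(actions, key=lambda a: current_utility(predicted_outcome(a)))

print(choose(["swallow_pill", "refuse_pill"]))  # -> refuse_pill

The point is not that the pill would fail to work, but that the decision to take it is made before the values change, with the old values still in charge.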
Incidentally, any professional philosophers interested in writing about self-modifying moral agents, do please get in touch with me.