Robin, remember I have to build a damn AI out of this theory, at some point. A self-modifying AI that begins anticipating dynamic inconsistency—that is, a conflict of preference with its own future self—will not stay in such a state for very long… did the game theorists and economists work out a standard answer for what happens after that?
If you like, you can think of me as defining the word “rationality” to refer to a different meaning—but I don’t really have the option of using the standard theory, here, at least not for longer than 50 milliseconds.
If there’s some nonobvious way I could be wrong about this point, which seems to me quite straightforward, do let me know.