[...] I cannot, with current technology, radically alter myself [...] Ideal-Nathan can and does, unless it puts a strong disutility on such an action, which means that I myself put a strong disutility on such an action.
That’s a mistake: you are not him. You make your own decisions. If you value following the ideal-self-modifying-you, that’s fine, but I don’t believe that’s in human nature; it’s only a declarative construction that doesn’t actually relate to your values. You may want to become the ideal-you, but that doesn’t mean you want to follow the counterfactual actions of the ideal-you if you haven’t actually become one.
The ideal-potentially-self-modifying me. No such being exists. I know, for a fact, that I am not perfectly rational in the sense in which I construe “rational”. That doesn’t mean that Omega couldn’t write a utility function that, if maximised, would perfectly describe my actions. Now in fact I am going to end up maximising that utility function: that’s just mathematics/physics (a trivial construction is sketched after the list below). But I am structured so as to value “me”, even if “me” is just a concept I hold of myself. When I talk of ideal-Nathan, I mean a being that has the utility function that I think I have, which is not the same as the utility function that I do have. I then work out what ideal-Nathan does. If I find that it does something I know for a fact I do not want to do, then I’m simply mistaken about ideal-Nathan, which is to say mistaken about my own utility function. That means that by considering the behaviour of ideal-Nathan (not looking so ideal now, is he?) I can occasionally discover something about myself. In this case I’ve discovered:
I don’t care about my past selves nearly as much as I thought I did.
I place a stronger premium on not modifying myself in such a way as to find killing pleasurable than I do on human life itself.
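On the “Omega could write such a utility function” point above: this is trivially true for any deterministic agent, though the construction is uninformative. A minimal sketch, with notation that is mine rather than anything from the thread:

$$ U(h) \;=\; \begin{cases} 1 & \text{if } a_t = \pi^{*}(o_1, a_1, \dots, o_t) \text{ for all } t,\\ 0 & \text{otherwise,} \end{cases} $$

where $h = (o_1, a_1, \dots, o_T, a_T)$ is a complete observation/action history and $\pi^{*}$ is whatever I actually do after each partial history. My actual behaviour attains $U = 1$ with certainty and nothing can score higher, so I “maximise” $U$ by construction. The catch is that this $U$ merely re-describes my behaviour and says nothing about what I value, which is exactly why it differs from the utility function I think I have.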