It seems pointless to me to debate things like pedophilia (a human male over a certain age attracted to human females under a certain age) in an era where the very concepts of ‘male’, ‘female’, and ‘human’ are likely to be extremely different from what we currently hold them to be. But for the sake of argument, let’s assume that somehow we have a society that is magically able to erase psychological problems, but where everything else is pretty much the same as it is today, up to and including security checks (!) for boarding airplanes (!!!).
I’m not sure what is being asked here. Are you asking whether agents are capable of modifying themselves and of authorizing modifications to their own utility functions? Of course they are. An agent’s goal system does not have laser-guided precision, and it’s quite possible for an agent to mistakenly take an action that actually decreases its utility. This need not be a ‘failure of rationality’; computing all possible consequences of one’s actions is simply impossible (in the strong sense).
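To make that concrete, here is a minimal toy sketch (entirely my own construction, not anything from the original question; the state space, `HORIZON`, and `CLIFF` values are invented for illustration). The agent simulates consequences only a couple of steps ahead, so it walks into an irreversible region whose penalty lies beyond its horizon, and its true utility drops even though every individual choice looked best at the time:

```python
# Hypothetical toy model: a bounded agent lowers its own true utility
# without any "failure of rationality" -- the bad consequence simply
# lies beyond its computational horizon.

ACTIONS = (0, 1)   # stay, or move right
HORIZON = 2        # how many steps ahead the agent can afford to simulate
CLIFF = 7          # states at or past this point have collapsed utility

def step(state: int, action: int) -> int:
    """World dynamics: past state 3 the drift upward is irreversible."""
    return state + 1 if state >= 4 else state + action

def utility(state: int) -> float:
    """True utility: rises with the state until the cliff, then collapses."""
    return float(state) if state < CLIFF else -100.0

def lookahead_value(state: int, depth: int) -> float:
    """Best utility reachable within `depth` simulated steps."""
    if depth == 0:
        return utility(state)
    return max(lookahead_value(step(state, a), depth - 1) for a in ACTIONS)

def choose_action(state: int) -> int:
    """Pick the action whose bounded-horizon estimate looks best."""
    return max(ACTIONS, key=lambda a: lookahead_value(step(state, a), HORIZON - 1))

state = 2
for t in range(8):
    a = choose_action(state)
    state = step(state, a)
    print(f"t={t}  action={a}  state={state}  utility={utility(state)}")
# Within a few steps the forced drift carries the agent past the cliff and
# its true utility collapses, even though each choice looked best locally.
```

In this toy setup, raising HORIZON to 5 lets the agent foresee the cliff and stay put, but in general the tree of consequences grows exponentially with depth, which is exactly the ‘impossible (in the strong sense)’ part.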
If you’re asking why someone would or would not want to remove pedophilia from his brain, it could be due to the other effects it has on his utility function (social ostracism, etc.).