Human utility is basically a function of image recognition, which is not a straightforward thing I can pin down by saying, "this is that." Sure, computers can do image recognition, and what they do genuinely is image recognition. However, what we can currently describe algorithmically is only a pale shadow of the human function, as demonstrated by every reCAPTCHA everywhere.
Given this, the complicating confounder is that our utility function is itself part of the image.
Also, we like images that move.
In sum, modifying our utility function is natural and normal, and is in fact one of the clauses of our utility function itself. Whether it's rational depends on your definition. If you grant the above and define rationality as self-alignment, then of course it's rational. If you ask whether changing your utility function is a "winning" move, probably not. I do think it's a very lifelike move, though, and anything lacking a mobile utility function is fundamentally eldritch in a way that is dangerous and not good.