Solution for whom? A certainly doesn't want you mucking around in its utility function, as that would cause it to stop doing good things in the universe (from its perspective).
If A knows that a preferred outcome is completely unobtainable, and it knows that some utilitarian theorist is going to discount its preferences relative to another agent's, isn't it rational for it to adjust its utility function? Perhaps not; striving for unobtainable goals may just be a human trait.
In pathological cases like that, sure, you can blackmail it into adjusting its post-op utility function. But only if it becomes convinced that doing so gives it a higher chance of getting the things it currently wants.
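That condition can be made precise: the agent scores a proposed self-modification with its *current* utility function, not the one it would end up with. Here's a minimal toy sketch of that rule; all the names (`expected_utility`, `should_self_modify`, the outcome labels and probabilities) are illustrative assumptions, not from any real framework.

```python
# Hypothetical sketch: an agent evaluates a proposed change to its own
# utility function using its *current* utility function. It accepts the
# change only if, by current lights, the change leads to better outcomes.

def expected_utility(utility, outcome_probs):
    """Expected utility of a probability distribution over outcomes."""
    return sum(p * utility(o) for o, p in outcome_probs.items())

def should_self_modify(current_utility, probs_if_kept, probs_if_modified):
    """Accept the modification iff it scores higher by the CURRENT utility."""
    return (expected_utility(current_utility, probs_if_modified)
            > expected_utility(current_utility, probs_if_kept))

# Toy blackmail scenario: complying (modifying) raises the probability of
# the outcome the agent currently wants, so it accepts the modification.
current_utility = {"wanted": 1.0, "unwanted": 0.0}.get
probs_if_kept = {"wanted": 0.2, "unwanted": 0.8}
probs_if_modified = {"wanted": 0.6, "unwanted": 0.4}

print(should_self_modify(current_utility, probs_if_kept, probs_if_modified))
# True: it modifies only because its current goals fare better that way
```

The point the sketch makes is that the post-op utility function never enters the decision; the blackmail works only through the pre-op one.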
A lot of those pathological cases go away with reflectively consistent decision theories, but perhaps not that one. I don't feel like working it out.