In pathological cases like that, sure, you can blackmail it into adjusting its post-op utility function. But only if it becomes convinced that doing so gives it a higher chance of getting the things it currently wants.
A lot of those pathological cases go away with reflectively consistent decision theories, but perhaps not that one. I don't feel like working it out.