If one believed there is only one thing that agents ought to maximise, could this be used to reinterpret an agent that actually maximises something else as maximising “the correct thing”, just with false beliefs?
Not easily. It’s hard to translate a u-maximiser for a complex u into, say, a u-minimiser without redefining the entire universe.
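To see roughly why, here is a toy formalisation (all notation below is illustrative, not from the original exchange). Suppose the agent's actual behaviour is to pick

$$a^* = \arg\max_a \sum_s p(s)\, u(a, s),$$

where $p$ is its belief over states $s$ and $u$ its utility. Reinterpreting it as a $u$-minimiser with false beliefs $q$ would require

$$\arg\max_a \sum_s p(s)\, u(a, s) \;=\; \arg\min_a \sum_s q(s)\, u(a, s)$$

to hold across every decision problem the agent might face. For a complex $u$ that distinguishes many states, the only way to arrange this is to choose $q$ so that the states it deems likely are precisely those where $u$'s ranking of actions is reversed relative to $p$; in effect, $q$ relabels which world the agent believes it inhabits, which is what "redefining the entire universe" amounts to.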