>erase its memory
That only works if the agent is motivated by something like “maximise your belief in what the expected value of U is”, rather than “maximise the expected value of U”. If you’ve got that problem, then the agent is unsalvageable—it could just edit its memory to make itself believe U is maximised.
Say w2a is the world where the agent starts in w2 and w2b is the world that results after the agent moves from w1 to w2.
Without considering the agent’s memory as part of the world, it seems like the problem is worse: the only way to distinguish between w2a and w2b is the agent’s memory of past events, so it seems that leaving the agent’s memory of the past out of the utility function requires U(w2a) = U(w2b).
U could depend on the entire history of states (rather than on the agent’s memory of that history).
Ah, misunderstood that, thanks.
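For concreteness, here is a minimal sketch of that last suggestion in Python. The state labels, trajectories, and the particular history-based utility are illustrative assumptions rather than anything from the thread; the only point is that a utility defined over the whole sequence of states can distinguish w2a from w2b even when their final states match and the agent’s memory is never consulted.

```python
# w2a: the agent simply starts in w2.
history_w2a = ["w2"]

# w2b: the agent starts in w1 and then moves to w2.
history_w2b = ["w1", "w2"]

def u_final_state(history):
    """Utility that depends only on the current (final) state."""
    return 1.0 if history[-1] == "w2" else 0.0

def u_history(history):
    """Utility over the entire history of states: here it arbitrarily
    penalises trajectories that ever passed through w1."""
    base = 1.0 if history[-1] == "w2" else 0.0
    penalty = 0.5 if "w1" in history else 0.0
    return base - penalty

# A state-only utility cannot tell the two worlds apart...
assert u_final_state(history_w2a) == u_final_state(history_w2b)

# ...but a history-based utility can, without reading (or trusting)
# the agent's memory at all.
assert u_history(history_w2a) != u_history(history_w2b)
```

Since U here reads the trajectory directly rather than the agent’s records, editing or erasing memory changes nothing about which world scores higher.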