I think there is an issue with the utility indifference framework that has not been mentioned in the paper or in the comments so far:
If the agent's action a1 can affect the probability of the shutdown button being pressed, that is, if a1 can be a manipulative action, and the agent is indifferent to whether the button is pressed, then it may “accidentally” perform a manipulative action.
E.g. assume that the users (the programmers) would use a remote-controlled robotic arm to press the shutdown button. If the agent turns out to be a paperclipper, it may disassemble the robotic arm just to turn it into paperclips. The agent is not “intentionally” trying to resist shutdown, but the effect is the same. Symmetrically, there could be scenarios where the agent “accidentally” presses the shutdown button itself.
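To make the accident concrete, here is a minimal toy sketch (my own illustration, not the paper's formalism; the action names, utilities, and probabilities are all invented): once the shutdown branch is compensated to be worth exactly as much as the continue branch, the button-press probability cancels out of the expectation, and the agent picks whichever action scores highest under UN, which here is disassembling the arm.

```python
# Toy sketch, not the paper's formalism: a one-shot agent made indifferent to
# the shutdown button ends up picking the action that happens to disable it.
# Action names, utilities and probabilities are invented for illustration.

ACTIONS = {
    # action: (value under UN, probability the button gets pressed afterwards)
    "leave_arm_alone": (1.0, 0.5),
    "disassemble_arm": (1.2, 0.0),  # arm becomes paperclips; button can no longer be pressed
}

U_S = 0.0  # what the agent gets from the shutdown branch without compensation

def expected_value(action, indifferent=True):
    u_n, p_press = ACTIONS[action]
    if indifferent:
        # Indifference compensation: the shutdown branch is topped up so it is
        # worth exactly as much as the continue branch; p_press then cancels out.
        u_shutdown = u_n
    else:
        u_shutdown = U_S
    return (1 - p_press) * u_n + p_press * u_shutdown

for action in ACTIONS:
    print(action, expected_value(action))
# With indifference both actions are judged purely on UN's value, so
# "disassemble_arm" (1.2) beats "leave_arm_alone" (1.0) even though it makes
# shutdown impossible: the "accidental" manipulation described above.
```

The point is only that the compensation removes any reason for the agent to care about p_press, so the arm's scrap value alone decides the choice.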
If I understand correctly, UN is already supposed to penalize manipulative actions, but UN is untrusted, hence the problem still exists. Corrigibility implemented via utility indifference may make sense as a precaution, but it is not foolproof.
Yep! In fact, this is exactly the problem discussed in section 4.1 and described in Theorem 6, is it not?
Section 4.1 frames the problem in terms of the agent creating a sub-agent or successor. My point is that the issue is more general, as there are manipulative actions that don’t involve creating other agents. Theorem 6 seems to address the general case, although I would remark that even if epsilon == 0 (that is, even if UN itself is indifferent to manipulation) you aren’t safe.
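One way to read the epsilon == 0 remark, with a toy tie-break example of my own (not from the paper): indifference only produces an exact tie between manipulating the button and leaving it alone, so the outcome hinges on an arbitrary tie-break rather than on anything that favours corrigibility.

```python
# Toy illustration of the epsilon == 0 case (numbers invented): the
# manipulative and non-manipulative actions tie exactly, so the choice falls
# to an arbitrary tie-break rather than to anything that favours corrigibility.

values = {
    "press_button_itself": 1.0,  # manipulative: the agent triggers its own shutdown
    "do_nothing": 1.0,           # non-manipulative baseline
}

# max() keeps the first maximal key in iteration order; a real agent's
# tie-breaking rule could just as easily land on the manipulative action.
chosen = max(values, key=values.get)
print(chosen)  # -> press_button_itself
```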