I’m not convinced that sufficiently intelligent agents would create subagents whose utility functions lack terms from the original’s UF, at least with suitable precautions. The example you used (an AI that wants to stay in its box releasing an agent to convert all box-hazards into raw material) suggests the Boxed AI would want to ensure that the Unboxed Agent was Boxed-AI-Friendly. What would happen if the Boxed AI had an unalterable belief that its utility function were likely to change in the future, and it couldn’t predict how?
Some formalized distinction between intentional probability manipulation and unintentional-but-causally-related happenings would be nice. Minimized intentional impact would then mean the AI avoids taking actions on high-impact issues and defers to humans instead. I’m not sure how it would behave when a human then deferred back to the AI. It seems like it would produce a sub-CEV result, because the human would be biased, scope-insensitive, prejudiced, etc. And then the natural improvement would be to have the AI implement DWIM CEV.
Has much thought gone into defining utility functions piecewise, or weighting them by some epistemic probabilities? I’m not sure if I’m just reiterating corrigibility here, but say an agent has a utility function equal to U·(1 − P(“H”)) + H·P(“H”), where P(“H”) is the agent’s credence that the Gatekeeper thinks the AI should be halted, U is its ordinary utility function, and H is a utility function rewarding halting and penalizing continuation. That was an attempt at a probabilistic piecewise UF approximating “if P(“H”) is high then H else U.”
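To make that concrete, here’s a minimal toy sketch of the blended utility, assuming nothing beyond what’s written above; all function and variable names (combined_utility, p_halt, task_utility, halt_utility) are mine, purely for illustration:

```python
# Toy sketch: the agent's effective utility is the P("H")-weighted blend
# of the halting utility H and its ordinary task utility U:
#   U * (1 - P("H")) + H * P("H")
# All names here are hypothetical, not from any existing framework.

def combined_utility(outcome, p_halt, task_utility, halt_utility):
    """Blend U and H according to the agent's credence that the
    Gatekeeper wants it halted."""
    return (1.0 - p_halt) * task_utility(outcome) + p_halt * halt_utility(outcome)


# Toy component utilities for a single illustrative outcome space.
def task_utility(outcome):
    return 10.0 if outcome == "task_done" else 0.0

def halt_utility(outcome):
    return 5.0 if outcome == "halted" else -5.0

# With high credence that the Gatekeeper wants a halt, continuing scores poorly:
print(combined_utility("task_done", 0.9, task_utility, halt_utility))  # 0.1*10 + 0.9*(-5) = -3.5
print(combined_utility("halted", 0.9, task_utility, halt_utility))     # 0.1*0  + 0.9*5   =  4.5
```

The point of the weighting is just that as P(“H”) approaches 1, the H term dominates and halting becomes the agent’s preferred outcome, while at low P(“H”) it behaves almost entirely according to U.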
Apologies for any incoherency; this is a straight-up brain dump.