“don’t kill an operator” seems like something that can more easily be encoded into an agent than “allow operators to correct things they consider undesirable when they notice them”.
In fact, even a perfectly corrigible agent with such a glaring initial flaw might kill the operator(s) before they can apply the corrections, not because the agent is resisting correction, but simply because doing so furthers whatever other goals it may have.
You’re exactly right, I think. IMO it may actually be easier to build an AI that can learn to want what some target agent wants, than to build an AI that lets itself be interfered with by some operator whose goals don’t align with its own current goals.