Is there an ML agent that abandons its utility function out-of-distribution without losing capabilities?

In "Techniques for optimizing worst-case performance", Paul Christiano says:

"The key point is that a malign failure requires leveraging the intelligence of the model to do something actively bad. If our model is trained by gradient descent, its behavior can only be intelligent when it is exercised on the training distribution — if part of the model never (or very rarely) does anything on the training distribution, then that part of the model can’t be intelligent. So in some sense a malign failure mode needs to use a code path that gets run on the training distribution, just under different conditions that cause it to behave badly."
Here is how I would rephrase it:
Aligned or Benign Conjecture: Let A be a machine learning agent you are training with an aligned loss function. If A is in a situation that is too far out of distribution for it to be aligned, it won’t act intelligently either.
(Although I’m calling this a “conjecture”, it’s probably context-dependent rather than a single mathematical statement.)
This seems pretty plausible, but I’m not sure it’s mathematically guaranteed 🤔. (For example: a neural network could have subcomponents that are great at specific tasks, where putting A in an out-of-distribution situation does not put those subcomponents out of distribution.)
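The subcomponent worry can be made concrete with a toy sketch (a hypothetical setup; the `skill` module, the context flag, and the range check are all invented for illustration): an input can be out of distribution for the agent as a whole while the inputs actually seen by one subcomponent stay inside their training range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training inputs: two task features plus a context flag that is
# always 0 during training.
train_x = np.concatenate(
    [rng.normal(size=(1000, 2)), np.zeros((1000, 1))], axis=1)

# A modular agent: this "skill" subcomponent reads only the two task
# features; a separate policy head (not shown) would read the skill
# output together with the context flag.
def skill(x):
    return np.tanh(x[:2])

# Crude in-distribution check: componentwise inside the training range.
def in_training_range(v, train_cols):
    lo, hi = train_cols.min(axis=0), train_cols.max(axis=0)
    return bool(np.all((v >= lo) & (v <= hi)))

# A test input with context flag = 1: novel for the agent as a whole,
# but the skill module's input marginal is unchanged.
test_x = np.array([0.5, -0.3, 1.0])

print(in_training_range(test_x, train_x))             # False: whole input is OOD
print(in_training_range(test_x[:2], train_x[:, :2]))  # True: skill input is in-distribution
```

Under this (admittedly crude) notion of "in distribution", whatever competence lives inside `skill` has no reason to degrade on the novel input, even though the agent as a whole is operating off-distribution.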
I’m wondering if there is any empirical evidence or theoretical argument against this conjecture.
As an example, can we make an ML agent, trained with stochastic gradient descent, that abandons its utility function out of distribution but retains the same capabilities in some sense? For example, if the agent is fighting in an army, could an out-of-distribution environment cause it to defect to a different army while still retaining its fighting skills?
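To make the army example concrete, here is a hand-built (not trained) sketch of the logical possibility: the agent's competence lives in an `aiming_skill` subcomponent, while which side that skill serves depends on a feature that was constant during training. All names and "weights" here are invented for illustration; this only shows that such a failure is representable, not that gradient descent would actually learn it.

```python
import numpy as np

# State: [enemy_direction, flag], where flag was always +1 in training.

def aiming_skill(enemy_direction):
    # The competence: point toward the enemy.
    return np.sign(enemy_direction)

def agent(state):
    enemy_direction, flag = state
    # A spurious dependence on the training-constant flag: during
    # training (flag = +1) this is indistinguishable from "attack the
    # enemy"; with flag = -1 (out of distribution) the very same skill
    # is aimed at the agent's own side.
    return flag * aiming_skill(enemy_direction)

print(agent(np.array([0.7, 1.0])))   # in-distribution: attacks enemy
print(agent(np.array([0.7, -1.0])))  # OOD flag: same skill, opposite target
```

The interesting empirical question is whether anything like this flag-dependence actually survives training in realistic setups, since on the training distribution the two behaviors are identical and gradient descent gets no signal distinguishing them.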