First, given an agent and an action that has been optimized to produce an undesirable consequence Y, we’d like to be able to tell that the action has this undesirable side effect. I think we can do this by having a smarter agent act as an overseer, and giving the smarter agent suitable insight into the cognition of the weaker agent (e.g. by sharing weights between the weak agent and an explanation-generating agent). This is what I’m calling informed oversight.
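To make the weight-sharing idea concrete, here is a minimal sketch (my own illustration, not a concrete proposal from the post) of what it might look like: the weak agent’s action head and an explanation-generating head both read from the same shared trunk, so the explanations the overseer reads are grounded in the same internal computation that produced the action. All module names and sizes are hypothetical.

```python
import torch
import torch.nn as nn

class SharedTrunkAgent(nn.Module):
    """Weak agent whose explanation head shares weights (the trunk) with its policy."""

    def __init__(self, obs_dim=64, hidden_dim=128, num_actions=8, vocab_size=1000):
        super().__init__()
        # Shared representation used by both heads.
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # The weak agent's action policy.
        self.action_head = nn.Linear(hidden_dim, num_actions)
        # Explanation generator: emits tokens the overseer can inspect.
        # Because it reads the shared trunk, it has access to whatever
        # features drove the action choice.
        self.explanation_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, obs):
        h = self.trunk(obs)
        return self.action_head(h), self.explanation_head(h)
```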
Second, given an agent, identify situations in which it is especially likely to produce bad outcomes, or proofs that it won’t, or enough understanding of its internals that you can see why it won’t. This is discussed in “Techniques for Optimizing Worst-Case Performance.”
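As a toy illustration of the first option (again my own sketch, not the method from that post), one could search over inputs for situations where an overseer-supplied badness score is highest, e.g. by gradient ascent. The `badness_score` callable is a hypothetical stand-in for the overseer’s judgment, and `agent` is assumed to have the interface of the sketch above.

```python
import torch

def search_for_bad_situations(agent, badness_score, obs_dim=64, steps=200, lr=0.05):
    """Return a (pseudo-)observation that approximately maximizes the badness score."""
    obs = torch.randn(1, obs_dim, requires_grad=True)
    opt = torch.optim.Adam([obs], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        action_logits, _ = agent(obs)
        # Higher score = the overseer judges the (situation, action) pair as worse,
        # so we minimize its negation to climb toward bad situations.
        loss = -badness_score(obs, action_logits)
        loss.backward()
        opt.step()
    return obs.detach()
```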
Paul, I’m curious whether you’d see it as necessary for these techniques to work that the optimization target is pretty good/safe (but not perfect): i.e., that some of the safety comes from the fact that agents optimized for approval or imitation have only a limited class of Y’s that they might also end up being optimized for.
I don’t think so, but I’m not sure I understand exactly what you mean.