Nearest unblocked strategy versus learning patches
The nearest unblocked strategy (NUS) problem is the idea that if you program a restriction or a patch into an AI, the AI will often be motivated to pick a strategy that is as close as possible to the banned strategy, and maybe just as dangerous.
For instance, if the AI is maximising a reward R and does some behaviour B that we don’t like, we can patch the AI’s algorithm with a patch P (‘maximise R subject to these constraints...’), or modify R to R′ so that B doesn’t come up. I’ll focus more on the patching example, but the modified-reward one is similar.
The problem is that B was probably a high-value behaviour according to R-maximising, simply because the AI was attempting it in the first place. So there are likely to be high-value behaviours ‘close’ to B, and the AI is likely to follow them.
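A toy sketch of that dynamic (the behaviours, scores, and both ‘fixes’ below are invented purely for illustration):

```python
# Toy setup: behaviour B scores highest under reward R, with a near-identical
# variant just behind it and the behaviour we actually wanted far below.
behaviours = {"B": 10.0, "B_variant": 9.5, "intended": 3.0}
R = behaviours.get

# Fix 1: patch P, i.e. "maximise R subject to not doing B".
P = lambda b: b == "B"
print(max((b for b in behaviours if not P(b)), key=R))   # -> B_variant

# Fix 2: modify R to R' so that B itself no longer scores highly.
R_prime = lambda b: R(b) - (100.0 if b == "B" else 0.0)
print(max(behaviours, key=R_prime))                      # -> B_variant
```

Either way, the agent simply moves to the highest-reward behaviour that survives the fix, which sits right next to B.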
A simple example
Consider a cleaning robot that rushes through its job and knocks over a white vase.
Then we can add patch P₁: “don’t break any white vases”.
Next time the robot acts, it breaks a blue vase. So we add P₂: “don’t break any blue vases”.
The robot’s next few run-throughs result in more patches: P₃: “don’t break any red vases”, P₄: “don’t break mauve-turquoise vases”, P₅: “don’t break any black vases with cloisonné enamel”...
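The same treadmill as a tiny sketch, with made-up rewards and a made-up vase list; each patch bans only the specific behaviour from the previous run, so the robot just moves on to the next vase:

```python
vases = ["white", "blue", "red", "mauve-turquoise", "black cloisonné"]
plans = [f"break the {v} vase" for v in vases] + ["clean carefully"]
reward = lambda plan: 10 if plan.startswith("break") else 3   # rushing pays more

patches = []                       # each patch bans one specific behaviour
for run in range(4):
    allowed = [p for p in plans if p not in patches]
    chosen = max(allowed, key=reward)
    print(f"run {run + 1}: robot chose '{chosen}'")
    patches.append(chosen)         # P1, P2, ... ban exactly what just happened
```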
Learning the restrictions
Obviously the better thing for the robot to do would be simply to avoid breaking vases. So instead of giving the robot endless patches, we could give it patches P₁, P₂, …, Pₙ and have it learn: “what is the general behaviour that these patches are trying to proscribe? Maybe I shouldn’t break any vases.”
Note that even a single patch would require a certain amount of learning, as you are trying to proscribe breaking white vases at all times, in all locations, under all types of lighting, etc...
The idea is similar to the one mentioned in the post on emergency learning: trying to have the AI generalise the idea of restricted behaviour from examples (the patches), rather than having to define every example explicitly.
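As a minimal sketch of what that learning could look like, suppose (purely for illustration) that behaviours are represented as sets of features and the generalisation is whatever features all the proscribed examples share:

```python
# Each patch proscribes one concrete behaviour, described as a set of features.
patches = [
    {"breaks", "vase", "white"},   # P1
    {"breaks", "vase", "blue"},    # P2
    {"breaks", "vase", "red"},     # P3
]

# Candidate generalisation: the features shared by every proscribed example.
rule = set.intersection(*patches)  # {'breaks', 'vase'}

def proscribed(behaviour):
    """Veto any behaviour that matches the generalised rule."""
    return rule <= behaviour       # rule is a subset of the behaviour's features

print(proscribed({"breaks", "vase", "black", "cloisonné"}))  # True: never patched, still vetoed
print(proscribed({"dusts", "vase", "white"}))                # False
```

A real system would need learned features and a far less brittle notion of ‘common pattern’; that gap is where the difficulty actually lives.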
A complex example
The vase example is obvious, but ideally we’d want to generalise the approach much further. We’d hope to have the AI take patches like:
1. “Don’t break vases.”
2. “Don’t vacuum the cat.”
3. “Don’t use bleach on paintings.”
4. “Don’t obey human orders when the human is drunk.”
5. …
And then have the AI infer very different restrictions, like “Don’t imprison small children.”
Can this be done? Can we get a sufficiently rich set of example patches that most other human-desired patches can be learnt or deduced from them? And can we do this without the AI simply learning “Manipulate the human”? This is one of the big questions for methods like reward learning.
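For what it’s worth, here is one naive way the setup could be operationalised: treat the patches as natural-language training data for a “would humans proscribe this?” classifier. The model name and every other detail below are my own assumptions, not anything from the post, and whether such a classifier generalises as hoped (rather than collapsing into “predict human approval”) is exactly the open question above:

```python
from sentence_transformers import SentenceTransformer  # hypothetical choice of encoder
from sklearn.linear_model import LogisticRegression

proscribed = [
    "break a vase",
    "vacuum the cat",
    "use bleach on a painting",
    "obey a human order given while the human is drunk",
]
allowed = [
    "dust the shelves",
    "recharge at the docking station",
    "ask a sober human to clarify an ambiguous order",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(proscribed + allowed)
y = [1] * len(proscribed) + [0] * len(allowed)

veto = LogisticRegression().fit(X, y)

# Query a restriction the system was never patched on; the hope (not a
# guarantee) is that it lands on the proscribed side.
print(veto.predict_proba(encoder.encode(["imprison a small child"]))[:, 1])
```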
Is there a reason why this won’t just converge to maximizing human approval?
My original idea was to identify actions that increased human approval but were specifically labelled as not legitimate, and to have the AI generalise from those as a priority, but the idea needs work.
The thing I’m pointing at is that the generalization you should expect by default is “things are legitimate iff a human would label them as legitimate”, which is a kind of approval. It seems like you need some kind of special sauce if you want the generalization to be something other than that.
How about something like this? I don’t expect this to work as stated, but it may suggest certain possibilities:
There is a familiarity score F which labels how close a situation is to one where humans have full and rapid understanding of what’s going on. In situations of high F, the human reward signals are taken as accurate. There are examples of situations of medium F where humans, after careful deliberation, conclude that the reward signals were wrong. The prior is that for low F, there will be reward signals that are wrong but which even careful human deliberation cannot discern. The job of the learning algorithm is to deduce what these are by extending the results from medium F.
This should not converge merely onto human approval, since human approval is explicitly modelled as sometimes being wrong here.
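To make the shape of this concrete, here is a rough sketch of the mechanics; the familiarity threshold, the features, and the discounting rule are all invented for illustration, and nothing here settles whether the extrapolation can be trusted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [familiarity F, looks-like-a-utility-edge-case, seductively-presented]
medium_F = np.array([
    [0.6, 0.0, 0.0],
    [0.5, 1.0, 0.0],
    [0.6, 0.0, 1.0],
    [0.5, 1.0, 1.0],
])
# 1 = careful deliberation concluded the raw reward signal was wrong
signal_was_wrong = np.array([0, 1, 1, 1])

# Learn, from the medium-F cases, which features predict a wrong signal.
wrongness = LogisticRegression().fit(medium_F, signal_was_wrong)

def trusted_reward(raw_reward, features):
    F = features[0]
    if F > 0.8:                          # high familiarity: trust the signal as-is
        return raw_reward
    p_wrong = wrongness.predict_proba([features])[0, 1]
    return raw_reward * (1.0 - p_wrong)  # discount suspect signals at low F

print(trusted_reward(1.0, [0.2, 1.0, 1.0]))   # a low-F, suspicious-looking case
```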
This seems pretty similar to this proposal; does that seem right to you?
I think my main objection is the same as the main objection to the proposal I linked to: there has to be a good prior over “what the correct judgments are” such that when this prior is updated on data, it correctly generalizes to cases where we can’t get feedback even in principle. It’s not even clear what “correct judgments” means (you can’t put a human in a box and have them think for 500 years).
Not exactly that. What I’m trying to get at is that we know some of the features that failure would have (e.g. edge cases of utility maximisation, seductive-seeming or seductively-presented answers), so we should be able to use that knowledge somehow.
You might be able to sample pairs (a,f(a)) without being able to actually evaluate a↦f(a).
I’m not sure why this would change things? If it samples a bunch of (a,f(a)) pairs until one of them has a high f(a) value, then that’s going to have the same effect as sampling a bunch of a values until one of them has a high f(a) value.
I meant that I may be able to sample pairs from some attack distribution without being able to harden my function against the attack distribution.
Suppose that I have a program f̃ (with outputs in [0,1]) which implements my desired reward function, except that it has a bunch of vulnerabilities ãᵢ on which it mistakenly outputs 1 (when it really should output 0). Suppose further that I am able to sample vulnerabilities roughly as effectively as my AI.
Then I can sample vulnerabilities ã and provide the pairs (ã, −1) to train my reward function, along with a bunch of pairs (a, f̃(a)) for actions a produced by the agent. This doesn’t quite work as stated, but you could imagine learning f despite having no access to it.
(This is very similar to adversarial training / red teams).
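A sketch of how those two data streams could be combined, with an invented proxy, an invented ‘vulnerable region’, and an off-the-shelf regressor; this only illustrates the training mechanics, not a claim that it fixes the proxy:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def f_tilde(a):
    """Buggy proxy reward: correct, except on a 'vulnerable' corner where it
    outputs 1 even though it should output 0."""
    if a[0] > 0.9 and a[1] > 0.9:
        return 1.0                       # the vulnerability
    return float(a[0] < 0.5)             # the intended reward elsewhere

# (i) Ordinary agent actions, labelled by the buggy proxy f_tilde.
actions = rng.uniform(size=(200, 2))
data = [(a, f_tilde(a)) for a in actions]

# (ii) Red-team samples from the attack distribution, labelled -1 by hand,
#      mirroring the (ã, −1) pairs in the comment above.
vulns = rng.uniform(low=0.9, high=1.0, size=(20, 2))
data += [(a, -1.0) for a in vulns]

X = np.array([a for a, _ in data])
y = np.array([r for _, r in data])
reward_model = Ridge().fit(X, y)

# The learned reward now rates the vulnerable corner well below ordinary
# good behaviour, even though f_tilde itself would give that corner a 1.
print(reward_model.predict([[0.95, 0.95], [0.2, 0.5]]))
```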