How about something like this? I don’t expect this to work as stated, but it may suggest certain possibilities:
There is a familiarity score F which labels how close a situation is to one where humans have full and rapid understanding of what’s going on. In situations of high F, the human reward signals are taken as accurate. There are examples of situations of medium F where humans, after careful deliberation, conclude that the reward signals were wrong. The prior is that for low F, there will be reward signals that are wrong but which even careful human deliberation cannot discern. The job of the learning algorithm is to deduce what these are by extending the results from medium F.
This should not converge merely onto human approval, since human approval is explicitly modelled as being wrong here.
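To make the extrapolation step more concrete, here is a minimal sketch of one way the setup above might be wired together, assuming each situation comes with a familiarity score F in [0, 1]. Everything here (the RewardDatum container, fit_error_model, corrected_reward, the choice of a logistic-regression error model, and the simple downweighting rule) is an illustrative assumption, not part of the proposal itself.

```python
# A minimal sketch, assuming each situation carries a familiarity score
# F in [0, 1]. All names (RewardDatum, fit_error_model, corrected_reward)
# and modelling choices are hypothetical illustrations.

from dataclasses import dataclass
from typing import List, Optional

import numpy as np
from sklearn.linear_model import LogisticRegression


@dataclass
class RewardDatum:
    features: np.ndarray                        # description of the situation
    familiarity: float                          # F: 1.0 = fully familiar
    raw_reward: float                           # the human reward signal as given
    deliberated_reward: Optional[float] = None  # filled in only for medium-F cases
                                                # where careful deliberation happened


def fit_error_model(data: List[RewardDatum]) -> LogisticRegression:
    """Learn from the medium-F cases which situations tend to have wrong
    reward signals. Assumes deliberation confirmed some signals and
    overturned others, so both classes are present in the training data."""
    deliberated = [d for d in data if d.deliberated_reward is not None]
    X = np.stack([d.features for d in deliberated])
    y = np.array([float(d.raw_reward != d.deliberated_reward) for d in deliberated])
    return LogisticRegression().fit(X, y)


def corrected_reward(datum: RewardDatum,
                     error_model: LogisticRegression,
                     high_f: float = 0.9) -> float:
    """Trust the raw reward at high F; below that, downweight it by the
    predicted probability that it is wrong, extrapolated from medium F."""
    if datum.familiarity >= high_f:
        return datum.raw_reward
    p_wrong = error_model.predict_proba(datum.features.reshape(1, -1))[0, 1]
    return (1.0 - p_wrong) * datum.raw_reward
```

The only point of the sketch is that the medium-F deliberation data is what trains the error model, and the low-F cases only ever see its extrapolation.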
This seems pretty similar to this proposal; does that seem right to you?
I think my main objection is the same as the main objection to the proposal I linked to: there has to be a good prior over “what the correct judgments are” such that when this prior is updated on data, it correctly generalizes to cases where we can’t get feedback even in principle. It’s not even clear what “correct judgments” means (you can’t put a human in a box and have them think for 500 years).
Not exactly that. What I’m trying to get at is that we know some of the features that failure would have (e.g. edge cases of utility maximisation, seductive-seeming or seductively-presented answers), so we should be able to use that knowledge somehow.
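One possible reading of “use that knowledge somehow”, continuing the sketch above: hand-code indicator features for the known failure signatures and append them to the situation description the error model sees, so its extrapolation to low F is anchored on those signatures. The function and flag names below are purely illustrative.

```python
# Continuing the earlier sketch: append hand-specified failure indicators to
# the situation features, so the error model can lean on them when
# extrapolating to low F. The flag names are purely illustrative.

import numpy as np


def with_failure_indicators(situation_features: np.ndarray,
                            is_utility_edge_case: bool,
                            answer_seems_seductive: bool) -> np.ndarray:
    """Return the feature vector augmented with known failure signatures."""
    flags = np.array([float(is_utility_edge_case), float(answer_seems_seductive)])
    return np.concatenate([situation_features, flags])
```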