Even worse, the (question, prediction, r=0) datapoints in the history could get so misleading that the oracle never learns anything.
Why do we have to give the oracle a zero reward for the non-erasure episodes? Why not just skip the learning/update step for those episodes?
In short, I think the CDT setup without machine learning assumes the problem is already solved, that online learning won’t work and is unsafe, and that this leaves us with a “training then deployment” setup that isn’t really original.
Why not do a combination of pre-training and online learning, where you do enough during the training phase to get a useful predictor, and then use online learning to deal with subsequent distributional shifts?
Although I guess that probably isn’t really original either. What seems original is this: during any episode where learning will take place, don’t let humans (or any other system that might be insecure against the oracle) see the oracle’s output until the episode is over.
The zero reward is in the paper. I agree that skipping would solve the problem. From talking to Stuart, my impression is that he thinks r=0 would be equivalent to skipping as a way of specifying “no learning”, or would just slow down learning. My disagreement is that I think it can confuse learning to the point of not learning the right thing.
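To make that concrete, here is a toy sketch (entirely my own construction, with made-up numbers, not anything from the paper) of the difference: logging (question, prediction, r=0) for every non-erasure episode drags the learned value toward zero, while skipping the update leaves only the genuine erasure-episode feedback.

```python
# Toy illustration (my own, not the paper's): compare "log r=0 on non-erasure
# episodes" against "skip the update entirely" for a simple reward estimate.
import random

random.seed(0)
EPSILON = 0.05      # probability that an episode is an erasure episode
TRUE_REWARD = 1.0   # reward the oracle would earn for a correct prediction

def run(policy, episodes=10_000):
    """Return the oracle's estimated value of a correct prediction."""
    total, count = 0.0, 0
    for _ in range(episodes):
        erasure = random.random() < EPSILON
        if erasure:
            total += TRUE_REWARD   # real feedback; the output was hidden
            count += 1
        elif policy == "r_zero":
            total += 0.0           # misleading datapoint: "this was worth 0"
            count += 1
        # policy == "skip": non-erasure episodes are not logged at all
    return total / max(count, 1)

print("skip:", run("skip"))      # ~1.0, matches the true reward
print("r=0 :", run("r_zero"))    # ~EPSILON, the real signal is drowned out
```

With a small ϵ, the r=0 datapoints outnumber the real ones by roughly 1/ϵ to 1, which is exactly the “confuse learning” worry above.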
Why not do a combination of pre-training and online learning, where you do enough during the training phase to get a useful predictor, and then use online learning to deal with subsequent distributional shifts?
Yes, that should work. My quote saying that online learning “won’t work and is unsafe” is imprecise. I should have said: “if ϵ is small enough to be comparable to the probability of outputting an escape message at random, then it is not safe. Also, if we keep sending the wrong reward r=0 instead of skipping, then it might not learn the correct thing if ϵ is not big enough”.
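Here is a rough sketch of the combined setup I have in mind (the class and function names are mine, purely to illustrate the control flow, not the paper’s algorithm): pre-train offline, then learn online, but only on the ϵ-fraction of episodes whose output is withheld until the episode ends, and skip the update on the rest.

```python
# Rough sketch (my own naming) of pre-training followed by erasure-gated
# online learning, with updates skipped on episodes whose output is shown.
import random

random.seed(1)
EPSILON = 0.1  # fraction of online episodes used for (hidden-output) learning

class Oracle:
    """Minimal stand-in predictor: remembers one answer per question."""
    def __init__(self):
        self.table = {}

    def predict(self, question):
        return self.table.get(question, "no idea")

    def update(self, question, true_answer):
        self.table[question] = true_answer

def pretrain(oracle, dataset):
    # Offline phase: enough supervised data to get a useful predictor.
    for question, answer in dataset:
        oracle.update(question, answer)

def online_episode(oracle, question, observe_true_answer, show_to_humans):
    prediction = oracle.predict(question)
    if random.random() < EPSILON:
        # Learning episode: nobody sees the output before the episode ends,
        # so the prediction cannot influence the outcome it is scored on.
        true_answer = observe_true_answer(question)
        oracle.update(question, true_answer)
    else:
        # Deployment episode: show the output and skip the update entirely,
        # rather than logging a misleading r=0 datapoint.
        show_to_humans(prediction)
    return prediction

# Tiny usage example
oracle = Oracle()
pretrain(oracle, [("2+2?", "4")])
online_episode(oracle, "2+2?", observe_true_answer=lambda q: "4",
               show_to_humans=print)
```

The point is just that updates only ever come from episodes whose output was withheld, so the online phase can track distributional shift without the prediction feeding back into its own training data.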
Although I guess that probably isn’t really original either. What seems original is this: during any episode where learning will take place, don’t let humans (or any other system that might be insecure against the oracle) see the oracle’s output until the episode is over.
That’s exactly it!