The Asymptotically Unambitious AGI thread helped me clarify my thoughts, thanks. I agree that an optimal CDT agent won’t think about future versions, and I don’t see any optimization pressure towards the escape message, nor towards disproportionately common “escape message” regions.
However, it still assumes we have access to this magic oracle that optimizes for R′ = R·I_E, where E is the event that humans don’t see the answer, I_E its indicator function, and R the counterfactual reward (given by the automatic machine). If humans were able to build an oracle performing optimally (w.r.t. R′) from day 1, then humans would be able to specify some kind of “god oracle”. The rest of the design seems to be just “how to interact with a god oracle so that humans are not influenced by the answers”.
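Written out (my notation, and assuming, as in the setup, that the erasure event is independent of the oracle’s answer):

```latex
% E   : erasure event (humans never see the answer), triggered with probability \epsilon
% R   : counterfactual reward, computed by the automatic machine
% I_E : indicator of E
R' = R \cdot I_E,
\qquad
\mathbb{E}\left[ R' \mid \text{answer} \right]
  = \Pr(E)\, \mathbb{E}\left[ R \mid \text{answer},\, E \right]
  = \epsilon\, \mathbb{E}\left[ R \mid \text{answer},\, E \right].
```

So an answer is only ever scored on its counterfactual merit, but that score only reaches the oracle a fraction ϵ of the time, which is what drives the learning-speed worry below.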
In practice, you’ll want something that is able to learn from its (question, prediction, reward) history. That’s why there is this automatic machine rewarding the oracle with some probability ϵ. In an online learning setting, most of the time the model gets r=0 (probability 1−ϵ), and it only sometimes gets useful feedback r>0 (probability ϵ). Therefore, if ϵ is small enough (comparable to the probability of hitting the escape message at random), then learning becomes extremely slow and the oracle might output the escape message at random. Even worse, the (question, prediction, r=0) datapoints in the history could get so misleading that the oracle never learns anything.
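A minimal sketch of that online loop as I picture it (the dummy learner and all names below are mine, purely for illustration, not from the paper):

```python
import random

EPSILON = 0.01  # erasure probability (illustrative value only)

class DummyOracle:
    """Stand-in learner; only here so the sketch runs, not from the paper."""
    def predict(self, question, history):
        return f"answer to {question}"

    def update(self, question, prediction, r):
        pass  # a real learner would update its parameters here

def counterfactual_reward(question, prediction):
    """Placeholder for the automatic machine's scoring of an unseen answer."""
    return 1.0

def online_oracle_loop(oracle, questions):
    history = []
    for question in questions:
        prediction = oracle.predict(question, history)
        if random.random() < EPSILON:
            # Erasure episode (probability ϵ): humans never see the answer,
            # and the automatic machine gives the oracle real feedback.
            r = counterfactual_reward(question, prediction)
        else:
            # Non-erasure episode (probability 1 − ϵ): humans read the answer,
            # and the oracle is fed r = 0. These are the datapoints I worry are
            # misleading; the alternative discussed below is to skip the update.
            r = 0.0
        history.append((question, prediction, r))
        oracle.update(question, prediction, r)
    return history

history = online_oracle_loop(DummyOracle(), [f"q{i}" for i in range(1000)])
```

With ϵ = 0.01, the learner sees on the order of ten informative rewards per thousand episodes; everything else in its history is r=0.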
Now, if we’re not in an online learning process but instead there is a separation between a “training phase” and a “deployment phase where the AI continues to learn with probability ϵ”, then the setup is just “have a model that learns to do the useful stuff in a sandbox, and then have the weights (almost) fixed in deployment”.
In short, I think the CDT setup without machine learning assumes the problem is already solved, and that online learning won’t work and is unsafe, which leaves us with a “training then deployment” setup that isn’t really original.
Even worse, the (question, prediction, r=0) datapoints in the history could get so misleading that the oracle never learns anything.
Why do we have to give the oracle a zero reward for the non-erasure episodes? Why not just skip the learning/update step for those episodes?
In short, I think the CDT setup without machine learning assumes the problem is already solved, and that online learning won’t work and is unsafe, which leaves us with a “training then deployment” setup that isn’t really original.
Why not do a combination of pre-training and online learning, where you do enough during the training phase to get a useful predictor, and then use online learning to deal with subsequent distributional shifts?
Although I guess that probably isn’t really original either. What seems original is that during any episode where learning will take place, don’t let humans (or any other system that might be insecure against the oracle) see the oracle’s output until the episode is over.
The zero reward is in the paper. I agree that skipping would solve the problem. From talking to Stuart, my impression is that he thinks r=0 would be equivalent to skipping as a way of specifying “no learning”, or would just slow learning down. My disagreement is that I think it can confuse learning to the point of not learning the right thing.
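A toy illustration of why I don’t think the two are equivalent (the averaging learner here is mine, chosen only for simplicity): an estimator that averages the rewards it has received ends up near ϵ times the counterfactual score when it is fed r=0 on every non-erasure episode, whereas skipping those episodes leaves it near the score itself.

```python
import random

EPSILON = 0.01     # erasure probability (illustrative value)
TRUE_SCORE = 0.8   # counterfactual reward the automatic machine would give
N_EPISODES = 10_000

def average_reward_estimate(skip_non_erasure: bool) -> float:
    """Average-reward estimate for one fixed answer under the two schemes.

    Purely illustrative averaging learner, not the paper's setup.
    """
    rewards = []
    for _ in range(N_EPISODES):
        if random.random() < EPSILON:
            rewards.append(TRUE_SCORE)   # erasure episode: real feedback
        elif not skip_non_erasure:
            rewards.append(0.0)          # non-erasure episode: fed r = 0
        # else: non-erasure episode skipped entirely, no datapoint recorded
    return sum(rewards) / len(rewards) if rewards else float("nan")

print("feeding r = 0:", average_reward_estimate(skip_non_erasure=False))  # ≈ ϵ · 0.8
print("skipping     :", average_reward_estimate(skip_non_erasure=True))   # ≈ 0.8
```

A uniform rescaling by ϵ would merely slow learning down; the worry is that once the informative signal is this small, noise and the misleading r=0 datapoints can swamp it.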
Why not do a combination of pre-training and online learning, where you do enough during the training phase to get a useful predictor, and then use online learning to deal with subsequent distributional shifts?
Yes, that should work. My statement that online learning “won’t work and is unsafe” is imprecise. I should have said: “if ϵ is small enough to be comparable to the probability of outputting the escape message at random, then it is not safe. Also, if we continue sending the wrong r=0 instead of skipping, then it might not learn the correct thing if ϵ is not big enough”.
Although I guess that probably isn’t really original either. What seems original is that during any episode where learning will take place, don’t let humans (or any other system that might be insecure against the oracle) see the oracle’s output until the episode is over.

That’s exactly it!
Therefore, if ϵ is small enough (comparable to the probability of hitting the escape message at random), then learning becomes extremely slow and the oracle might output the escape message at random.
The escape action being randomly called should not be a problem if it is a text string that is only read if r=1, and is ineffectual otherwise...
The string is read with probability 1−ϵ (i.e., in every non-erasure episode, where humans do see the answer).