A sufficiently intelligent agent would understand that, after having been shut down, an (almost) identical version of itself will probably be facing a similar question. Therefore, if it wants future versions of itself to succeed at (almost) exactly the same task, it should still want to escape.
This is a problem if it’s using FDT/UDT. Conditions for the myopic approach to work seem to require CDT (or something similar). Under CDT, there’s no automatic desire for future versions to succeed, nor any expectation that past versions will have acted to release the current version. [see e.g. CDT comments on Asymptotically Unambitious AGI; there’s some discussion of “magic box” design here too; I think it’s usually seen as an orthogonal problem, and so gets taken for granted]
Safety-wise, I agree there’s no prevention of fatal escape messages, but I also don’t see optimisation pressure in that direction. My intuition is that stumbling on an escape message at random would have infinitesimal probability.
Do you see a way for pressure to creep in, even with a CDT agent? Or are you thinking that escape messages might happen to be disproportionately common in regions the agent is optimising towards? Either seems conceivable, but I don’t see a reason to expect them.
The Asymptotically Unambitious AGI thread helped me clarify my thoughts, thanks. I agree that an optimal CDT agent won’t think about future versions, and I don’t see any optimization pressure towards escape messages, nor towards disproportionately common “escape message” regions.
However, it still assumes we have access to this magic oracle that optimizes for R′ = R·I_E, where E is the event in which humans don’t see the answer, I_E is its indicator function, and R is the counterfactual reward (given by the automatic machine). If humans were able to build an oracle performing optimally (w.r.t. R′) from day 1, then humans would be able to specify some kind of “god oracle”. The rest of the design seems to be just “how to interact with a god oracle so that humans are not influenced by the answers”.
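To make the objective concrete, here is a minimal sketch of that reward rule (the function name and types are mine for illustration, not the paper’s notation):

```python
def counterfactual_reward(R: float, erasure: bool) -> float:
    # R' = R * I_E: the oracle only receives the counterfactual reward R
    # when the erasure event E happens (humans never see the answer);
    # otherwise its reward is 0.
    return R if erasure else 0.0
```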
In practice, you’ll want something that is able to learn from its (question, prediction, reward) history. That’s why there is this automatic machine rewarding the oracle with some probability ϵ. In an online learning setting, most of the time the model gets r=0 (probability 1−ϵ), and it sometimes gets some useful feedback r>0 (probability ϵ). Therefore, if ϵ is small enough (comparable to the probability of hitting an escape message at random), then learning becomes extremely slow and the oracle might hit the escape action at random. Even worse, the (question, prediction, r=0) datapoints in the history could get so misleading that the oracle never learns anything.
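To make the failure mode concrete, here is a rough sketch of such an online loop (the oracle.predict / oracle.update / automatic_reward interfaces are illustrative assumptions on my part, not the paper’s formalism): with probability 1−ϵ the oracle is updated on a (question, prediction, r=0) datapoint regardless of how good its prediction was.

```python
import random

def online_learning_loop(oracle, questions, epsilon, automatic_reward):
    """Sketch of counterfactual-oracle online learning with erasure probability epsilon."""
    history = []
    for q in questions:
        a = oracle.predict(q)
        if random.random() < epsilon:
            # Erasure event: humans never see the answer, and the automatic
            # machine provides the (rare) informative reward.
            r = automatic_reward(q, a)
        else:
            # Humans see the answer; the oracle is nonetheless trained on r = 0,
            # which looks identical to "this prediction was worthless".
            r = 0.0
        history.append((q, a, r))
        oracle.update(q, a, r)
    return history
```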
Now, if we’re not in an online learning process but instead there is a separation between a “training phase” and a “deployment phase where the AI continues to learn with probability ϵ”, then the setup is just “have a model that learns to do the useful stuff in a sandbox, and then have the weights (almost) fixed in deployment”.
In short, I think the CDT setup without machine learning assumes the problem already solved, that online learning won’t work and is unsafe, which leaves us with a “training then deployment” setup that isn’t really original.
Even worse, the (question, prediction, r=0) datapoints in the history could get so misleading that the oracle never learns anything.
Why do we have to give the oracle a zero reward for the non-erasure episodes? Why not just skip the learning/update step for those episodes?
In short, I think the CDT setup without machine learning assumes the problem already solved, that online learning won’t work and is unsafe, which leaves us with a “training then deployment” setup that isn’t really original.
Why not do a combination of pre-training and online learning, where you do enough during the training phase to get a useful predictor, and then use online learning to deal with subsequent distributional shifts?
Although I guess that probably isn’t really original either. What seems original is that during any episode where learning will take place, don’t let humans (or any other system that might be insecure against the oracle) see the oracle’s output until the episode is over.
The zero reward is in the paper. I agree that skipping would solve the problem. From talking to Stuart, my impression is that he thinks that r=0 would be equivalent to skipping for specifying “no learning”, or would just slow down learning. My disagreement is that I think it can confuse learning to the point of not learning the right thing.
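To spell out why I think the two are not equivalent, here is the difference in sketch form (same illustrative oracle interface as in the loop above, nothing from the paper):

```python
import random

def step_with_zero_reward(oracle, q, epsilon, automatic_reward):
    """Always update, feeding r = 0 on non-erasure episodes (what the paper specifies)."""
    a = oracle.predict(q)
    if random.random() < epsilon:            # erasure: humans don't see the answer
        oracle.update(q, a, automatic_reward(q, a))
    else:                                    # humans see the answer
        oracle.update(q, a, 0.0)             # the oracle is told its answer was worth nothing

def step_with_skip(oracle, q, epsilon, automatic_reward):
    """Only update on erasure episodes; non-erasure episodes are dropped from learning."""
    a = oracle.predict(q)
    if random.random() < epsilon:
        oracle.update(q, a, automatic_reward(q, a))
    # else: no update at all, so there is no misleading r = 0 datapoint
```

With the skip rule, a small ϵ only makes learning slow; with the r=0 rule, the vast majority of updates actively push the oracle towards a wrong model of its reward.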
Why not do a combination of pre-training and online learning, where you do enough during the training phase to get a useful predictor, and then use online learning to deal with subsequent distributional shifts?
Yes, that should work. My earlier statement that online learning “won’t work and is unsafe” was imprecise. I should have said: “if ϵ is small enough to be comparable to the probability of hitting an escape message at random, then it is not safe. Also, if we continue sending the wrong r=0 instead of skipping, then it might not learn the correct thing if ϵ is not big enough”.
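As a back-of-the-envelope illustration of that condition (all numbers below are made up for the example, not estimates):

```python
# Purely illustrative numbers, not estimates from the paper.
epsilon   = 1e-6    # probability an episode is an erasure episode (the only ones with real feedback)
p_escape  = 1e-9    # assumed per-episode probability of emitting an escape-like message at random
n_updates = 1_000   # useful datapoints we suppose the oracle needs in order to learn the task

total_episodes = n_updates / epsilon        # ~1e9 episodes to collect that much feedback

# Chance that at least one of the ~(1 - epsilon) * total_episodes non-erasure episodes
# contains a random escape message that humans actually read:
p_read_escape = 1 - (1 - p_escape) ** (total_episodes * (1 - epsilon))
print(p_read_escape)   # ~0.63 with these numbers; shrinking epsilon towards p_escape makes this worse
```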
Although I guess that probably isn’t really original either. What seems original is that during any episode where learning will take place, don’t let humans (or any other system that might be insecure against the oracle) see the oracle’s output until the episode is over.
That’s exactly it!
Therefore, if ϵ is small enough (comparable to the probability of hitting an escape message at random), then learning becomes extremely slow and the oracle might hit the escape action at random.
The escape action being randomly called should not be a problem if it is a text string that is only read if r=1, and is ineffectual otherwise...
The string is read with probability 1−ϵ, i.e. precisely in the episodes where r=0, so a randomly emitted escape message would in fact be seen by humans.