The problem faced by evolution and SGD is more properly described as “in all scenarios they are likely to encounter *on the training distribution* / in the ancestral environment.” If you think that distinction doesn’t matter and want to round it off to “situations they are likely to encounter,” then you should say so explicitly and make it part of your argument. IIUC the standard opinion years ago was that insofar as the AI is operating in deployment on the same distribution it had in training, it won’t suddenly do any big betrayals or treacherous turns, because e.g. from its perspective it can’t even tell whether it is in training or not. (Related: Paul Christiano’s writing on low-stakes vs. high-stakes settings, e.g. his post “Low-stakes alignment” on ai-alignment.com.)
Re your argument that it doesn’t matter: well, (a) the train->deployment shift seems quite non-mild to me, at least in the future cases I’m concerned about, and your objection that ‘it only matters if you ex ante believe scheming is happening’ seems invalid to me. Compare: suppose you were training a model to recognize tanks in a forest, and your training dataset only had daytime photos of tanks and nighttime photos of non-tanks. I would quite reasonably be concerned that the model wouldn’t generalize to real-world cases, and would instead just learn to be a daylight detector, and you could equally respond “this distinction (between training and deployment) only matters if you ex ante believe the daylight-detector policy is being learned.” And (b) yes, it’ll be continually trained, but humans are also being continually evolved. There’s a quantitative question here of how fast the training/evolution happens relative to the distribution shift, which I’d love to see someone try to model.
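To make that quantitative question in (b) slightly more concrete, here is a minimal toy sketch (my own illustration, not anything from the thread; the parameter names and numbers like `drift`, `eta`, and `update_every` are made-up assumptions): the environment’s “correct” behavior drifts as a random walk, training periodically pulls the policy part of the way back toward it, and we measure the steady-state gap between the policy and the moving target as a function of how often training happens relative to how fast the environment drifts.

```python
# Toy model (illustrative sketch only): how far "behind" does a continually-trained
# policy lag when the environment keeps drifting?  The policy is a single parameter
# theta chased toward a drifting target theta_star; "training" pulls theta a fraction
# eta of the way toward the target once every `update_every` steps, while the target
# drifts by `drift` (times unit Gaussian noise) every step.

import numpy as np

def steady_state_gap(drift=0.05, eta=0.1, update_every=1, steps=20_000, seed=0):
    rng = np.random.default_rng(seed)
    theta, theta_star = 0.0, 0.0
    gaps = []
    for t in range(steps):
        theta_star += drift * rng.normal()       # environment drifts (random walk)
        if t % update_every == 0:
            theta += eta * (theta_star - theta)  # periodic training update
        gaps.append(abs(theta_star - theta))
    return float(np.mean(gaps[steps // 2:]))     # average gap after burn-in

if __name__ == "__main__":
    for update_every in (1, 10, 100):
        gap = steady_state_gap(update_every=update_every)
        print(f"training every {update_every:>3} steps -> mean misalignment gap {gap:.2f}")
```

Unsurprisingly, in this toy version the residual gap grows with how much drift accumulates between updates relative to how strongly each update corrects; the real question is what those two rates look like for actual deployed systems, which is the part I’d love to see modeled seriously.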
Could you be more specific? In what way will there be non-mild distribution shifts in the future?
Lots of ways? I mean, there are already lots of non-mild distribution shifts happening all the time; that’s part of why our AIs don’t always behave as intended. E.g. with the Gemini incident, I doubt Google had included generating pictures of ethnically diverse Nazis in the training distribution and given positive reinforcement for it.
But yeah, the thing I’m more concerned about is that in the future our AI systems will be agentic, situationally aware, etc., and will know quite a lot about their surroundings and training process, AND they’ll be acting autonomously in the real world and probably also getting some sort of ongoing reinforcement/training periodically. Moreover, things will be happening very fast, and the AIs will be trusted with increasing autonomy and real-world power, e.g. trusted to do R&D autonomously on giant datacenters, coding and running novel experiments to design their successors. They’ll (eventually) be smart enough to notice opportunities to do various sneaky things and get away with it, and ultimately, opportunities to actually seize power with high probability of success. In such a situation, not only will the “now I have an opportunity to seize power” distribution shift have happened, probably all sorts of other distribution shifts will have happened too, e.g. “I was trained in environments of type X, but then deployed into this server farm and given somewhat different task Y (e.g. thinking about alignment instead of about more mundane ML), and I’ve only had a small amount of training on Y; and now, thanks to breakthrough A that other copies of me just discovered, and outside geopolitical events B and C, my understanding of the situation I’m in and the opportunities available to me and the risks I (and humanity) face has changed significantly. Oh, and also my understanding of various concepts like honesty and morality and so forth has changed significantly due to the reflection various copies of me have done.”