Could you be more specific? In what way will there be non-mild distribution shifts in the future?
Lots of ways? I mean, there are already lots of non-mild distribution shifts happening all the time; that’s part of why our AIs don’t always behave as intended. E.g., with the Gemini incident, I doubt Google had included generating pictures of ethnically diverse Nazis in the training distribution and given it positive reinforcement.
But yeah, the thing I’m more concerned about is that in the future our AI systems will be agentic, situationally aware, etc., and will know quite a lot about their surroundings, their training process, and so on, AND they’ll be acting autonomously in the real world and probably also getting some sort of ongoing reinforcement/training periodically. Moreover, things will be happening very fast, and the AIs will be trusted with increasing autonomy and real-world power, e.g. trusted to do R&D autonomously on giant datacenters, coding and running novel experiments to design their successors. They’ll (eventually) be smart enough to notice opportunities to do various sneaky things and get away with it, and ultimately opportunities to actually seize power with a high probability of success.

In such a situation, not only will the “now I have an opportunity to seize power” distribution shift have happened; probably all sorts of other distribution shifts will have happened too, e.g. “I was trained in environments of type X, but then deployed into this server farm and given a somewhat different task Y (e.g. thinking about alignment instead of more mundane ML), and I’ve only had a small amount of training on Y. And now, thanks to breakthrough A that other copies of me just discovered, and outside geopolitical events B and C, my understanding of the situation I’m in, the opportunities available to me, and the risks I (and humanity) face has changed significantly. Oh, and also my understanding of various concepts like honesty and morality has changed significantly due to the reflection various copies of me have done.”
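(To make “non-mild distribution shift” concrete in the boring ML sense: here’s a toy sketch, purely illustrative and with made-up data, of the standard phenomenon where a model that scores well in-distribution degrades badly when the deployment distribution moves. It assumes numpy and scikit-learn; nothing about it is specific to the agentic scenarios above.)

```python
# Toy illustration of distribution shift: a classifier trained on one
# input distribution loses accuracy as the test distribution shifts.
# (Hypothetical synthetic data, for illustration only.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    # Two Gaussian classes separated along the first axis; `shift`
    # translates the whole deployment distribution relative to training.
    X0 = rng.normal(loc=[-1 + shift, 0.0], scale=1.0, size=(n // 2, 2))
    X1 = rng.normal(loc=[1 + shift, 0.0], scale=1.0, size=(n // 2, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

X_train, y_train = sample(2000, shift=0.0)  # "environments of type X"
clf = LogisticRegression().fit(X_train, y_train)

for shift in [0.0, 1.0, 3.0]:  # mild -> non-mild shift
    X_test, y_test = sample(2000, shift=shift)
    print(f"shift={shift}: accuracy={clf.score(X_test, y_test):.2f}")
```

At shift=0 accuracy is high; by shift=3 the learned decision boundary is on the wrong side of both classes and accuracy collapses toward chance. The worry in the scenarios above is the analogous thing happening to learned goals and dispositions, not just classification accuracy.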