Yep, sorry, I’ve probably been pretty unclear with describing this example.
I’m confused about why you think it both (a) not plan ahead of time to disempower humans, and (b) disempower humans when it has the chance. If the predictive model is accurate enough such that it is predictable that disempowering humans would be instrumentally useful, then wouldn’t the model incorporate that into its earlier plans?
For the sake of simplicity, let’s ignore the “execute on arbitrary short-term goals” half of the system I described, and just consider a system which was trained to generate long-term plans that maximize profit by imitating human-generated plans; if we label our training data with information about plan quality, such an agent could plausibly output plans as good as a human expert planner. So the training goal of this system is the short-term goal of minimizing next-token prediction loss during the current episode.
Let’s suppose that this agent is subtly inner misaligned with the (still short-term) inner goal of minimizing the value in the memory address which, during training, represented the current episode’s prediction loss. So if this agent could wirehead by directly setting this number to be very small before the end of its current episode, it would do so. But if it’s not able to do that, it would likely exhibit the behavior we want (outputting plans which mimic its training data), since that’s the best way to get the value in the memory address it cares about to be small. In particular, this system wouldn’t be interested in outputting plans which would manipulate us into setting this value to be small in the future, since it myopically only cares about this memory address’s value during the current episode.
But if at some point the system accumulates lots of resources and power by normal means, it may find that it has the option of e.g. suddenly taking control of the computers that it’s running on and manually setting the value in the memory address it cares about to be small. (I’m assuming here that the system can do things other than just output tokens, e.g. search the internet, contact human experts on the side, etc., so that it could plausibly have a way of taking over its computing cluster without ending the current episode.) So this is a bad action that the system wouldn’t have planned on setting up ahead of time, but would take if it found it was able to.
Yep, sorry, I’ve probably been pretty unclear with describing this example.
For the sake of simplicity, let’s ignore the “execute on arbitrary short-term goals” half of the system I described, and just consider a system which was trained to generate long-term plans that maximize profit by imitating human-generated plans; if we label our training data with information about plan quality, such an agent could plausibly output plans as good as a human expert planner. So the training goal of this system is the short-term goal of minimizing next-token prediction loss during the current episode.
Let’s suppose that this agent is subtly inner misaligned with the (still short-term) inner goal of minimizing the value in the memory address which, during training, represented the current episode’s prediction loss. So if this agent could wirehead by directly setting this number to be very small before the end of its current episode, it would do so. But if it’s not able to do that, it would likely exhibit the behavior we want (outputting plans which mimic its training data), since that’s the best way to get the value in the memory address it cares about to be small. In particular, this system wouldn’t be interested in outputting plans which would manipulate us into setting this value to be small in the future, since it myopically only cares about this memory address’s value during the current episode.
But if at some point the system accumulates lots of resources and power by normal means, it may find that it has the option of e.g. suddenly taking control of the computers that it’s running on and manually setting the value in the memory address it cares about to be small. (I’m assuming here that the system can do things other than just output tokens, e.g. search the internet, contact human experts on the side, etc., so that it could plausibly have a way of taking over its computing cluster without ending the current episode.) So this is a bad action that the system wouldn’t have planned on setting up ahead of time, but would take if it found it was able to.