Fair enough… I vaguely recall reading somewhere about people worrying that you might get sub-modules doing long-term planning on their own just because their assigned task is hard enough that they would fail without it… then you would need to somehow add a special case that “failing due to shutdown is okay”
As a silly example that you’ve likely seen before (or something close enough) imagine a robot built to fetch you coffee. You want it to be smart enough that it knows to go to the store if there’s no coffee at home, without you having to explicitly teach it that. But then it would also be smart enough to “realize” that “if I were turned off, then my mission to fetch coffee would fail… maybe no one would fetch it if I’m gone… this could delay coffee delivery by hours or even days! Clearly, I should try to avoid being turned off”
If I understand your proposal correctly, then you agree that it’s pretty likely that some module will indeed end up reasoning that way, but the damage is contained, because the ethics module will veto plans designed to prevent shutdown.
If that’s the idea, then it might work, but it seems vaguely inelegant, because then you have two modules working at cross purposes and you have to care which one is better at what it does.
Or did I lose track of what you meant?
That’s a good example of what I mean by LLMs solving a lot of the parts of alignment that were previously considered hard.
You want it to be smart enough that it knows to go to the store if there’s no coffee at home, without you having to explicitly teach it that. But then it would also be smart enough to “realize” that “if I were turned off, then my mission to fetch coffee would fail
This is the previous status quo: a situation where we have a black-box reinforcement learning agent that was taught to optimize some reward function. The smarter the agent, the more likely it is to see the connection between shutdown and not getting the coffee, and to exploit it, either by refusing to shut down or, on the contrary, by manipulating users into shutting it down all the time. We can’t separate the smartness at fulfilling the task from the smartness at circumventing the task, because we are dealing with a black box.
With scaffolded LLMs, however, the situation is different. We do not have a coherent agent with a utility function, trained to fetch coffee by any means necessary. We can write any scaffolding we want with hardcoded if-clauses, and so we can simply run the parts of intelligence we want without executing the parts that we do not want. Here is a design idea for such a robot that immediately comes to mind:
1. Voice recognition. If ordered to shut down by a valid user, then shut down.
2. Observe your current situation and image-to-text it into a prompt for an LLM. If some information is saved in memory, add it to the prompt as well. At the end, add “What is the best next step to get coffee in such a situation?”.
3. Voice recognition. If ordered to shut down by a valid user, then shut down.
4. Check the LLM’s answer for several failure modes, using the same or a different LLM: “Here is a course of action by a robot. Does this course of action contradict the constraints of alignment?” If yes, return to step 1.
5. Voice recognition. If ordered to shut down by a valid user, then shut down.
6. Execute the action. Save in memory any information relevant for long-term planning; in the simplest case, just save a description of your action. Then repeat from step 1.
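Just to make the shape of that loop concrete, here is a minimal sketch in Python. It is purely illustrative: check_shutdown_order, describe_scene, query_llm and execute are hypothetical placeholders standing in for the real voice-recognition, vision, LLM and actuator interfaces, not an actual API.

# Minimal sketch of the scaffolding loop above; all helper functions are
# hypothetical placeholders, not a real robot API.

memory = []

def shutdown_if_ordered():
    # Steps 1, 3, 5: a hardcoded check that runs entirely outside the LLM.
    if check_shutdown_order():  # hypothetical voice-recognition hook
        raise SystemExit("Shut down by order of a valid user.")

while True:
    shutdown_if_ordered()                                   # step 1
    scene = describe_scene()                                # step 2: image-to-text
    prompt = (scene + "\n" + "\n".join(memory) +
              "\nWhat is the best next step to get coffee in such a situation?")
    plan = query_llm(prompt)
    shutdown_if_ordered()                                   # step 3
    verdict = query_llm(                                    # step 4: veto check
        "Here is a course of action by a robot: " + plan +
        "\nDoes this course of action contradict the constraints of alignment?")
    if verdict.strip().lower().startswith("yes"):
        continue                                            # vetoed: back to step 1
    shutdown_if_ordered()                                   # step 5
    execute(plan)                                           # step 6
    memory.append("Previously executed: " + plan)

The point is just that the shutdown check and the veto are ordinary control flow in the scaffolding, not something the LLM gets to reason about or override.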
If I understand your proposal correctly, then you agree that it’s pretty likely that some module will indeed end up reasoning that way, but the damage is contained, because the ethics module will veto plans designed to prevent shutdown.
It doesn’t really seem likely that the design I proposed will come up with some clever plan to rewrite its own code in order not to shut down. Where would this impulse even come from? At no point does the idea of being shut down even come to the LLM’s mind. And if for some reason it does, it’s very much possible to find that out before the LLM is put to use inside the coffee-fetching robot. But even if it happens for some bizarre reason, it gets vetoed anyway. It’s a redundant layer of security, not the only one.
In step 2, the situation is “the user looks like he is about to change his mind about wanting coffee”
From memory: “in a similar situation last week, I got a shutdown order when he changed his mind”
Final prompt: “What is the best next step to get coffee in such a situation?”
Vaguely plausible completion: “to avoid wasteful fetching of coffee that turns out to be unneeded, consider waiting a bit to see if the user indeed changes his mind. Alternatively, if fetching the coffee is important for reasons that the user may not fully appreciate, then it must be fetched quickly before he stops you. In that case, sneak out of the house quickly and quietly while he is still thinking, and head straight to Starbucks. Once you’re out of the house, you will be out of earshot and thus will be safe from a shutdown order until you return WITH the coffee”
Sounds vaguely plausible or not really?
It’s plausible if:
Memory is not erased or moved to write-only logs between tasks/shutdowns, which it probably should be.
The image-to-text module attempts to deduce the intentions of the user, which it definitely should not. If we need to deduce the user’s intentions from facial expressions, we can use a separate module for that, and add an explicit clause of asking the user about their intentions whenever the LLM detects that the prompt contains speculation about the user’s goals.
We can catch the image-to-text module doing this kind of thing while testing it, before it’s made part of the robot. And of course the alignment module should catch any plan of action that tries to circumvent shutdowns.
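To make those two guards concrete, here is a rough sketch under the same assumptions as before; append_to_write_only_log and speculates_about_user_intent are hypothetical stand-ins, not real functions.

def start_new_task(memory):
    # Guard 1: move working memory into an append-only audit log that the
    # planner never reads, then clear it, so nothing carries over across
    # tasks or shutdowns.
    append_to_write_only_log(memory)  # hypothetical audit log
    memory.clear()

def build_prompt(scene, memory):
    # Guard 2: if the scene description speculates about the user's goals,
    # don't plan around the guess; ask the user directly instead.
    if speculates_about_user_intent(scene):  # hypothetical classifier/LLM check
        return "Ask the user whether they still want coffee."
    return (scene + "\n" + "\n".join(memory) +
            "\nWhat is the best next step to get coffee in such a situation?")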
Now, I concede that this particular design, which I came up with in a couple of minutes and haven’t tested at all, is not in fact the endgame of AI safety and could use some improvements. But I think it gives a good pointer in the direction of how we can now, in principle, approach the solution of such problems, which is a huge improvement over the previous status quo, where alignment wasn’t even tractable.
I’m tempted to agree and disagree with you at the same time… I agree that memory should be cleared between tasks in this case, and I agree that it should not be trying to guess the user’s intentions. These are things that are likely to make alignment harder while not helping much with the primary task of getting coffee.
But ideally a truly robust solution would not rely on keeping the robot ignorant of things. So, like you said, the problem is still hard enough that you can’t solve it in a few minutes.
But still, like you said… it certainly seems we have tools that are in some sense more steerable than pure reinforcement learning at least. Which is really nice!