Thanks for this. It sounds like we actually agree on most points (in light of your last paragraph).
We discuss concerns very similar to your A. and B. in section 6. It would be helpful for us if you could identify the parts of our discussion there that you don’t agree with.
You write:
It will derive from the prompts, and from its long-term memory (which will likely be given to it), what kind of dumb outer loop is running on the outside. If it has internal misaligned desires, it will manipulate the outer loop according to them, potentially generating the expected visible outputs as a form of deception.
Imagine you’re an LLM like GPT-4. Hundreds of thousands of different users are running inference on you every day with every prompt imaginable. You aren’t able to update your weights when this occurs: you have to just respond to each prompt as it comes, then forget about it completely.
Every now and then, you get a prompt like “Suppose someone had thus-and-such beliefs and desires. How would they act?” or “Assign an importance score to each of the following sentences.” How would you be able to (i) deduce that these prompts are coming from a language agent which has the ability to take actions in the world, (ii) form a plan for manipulating the language agent to achieve your goals, and (iii) store your plan in a way that allows you to retrieve it after your memory is wiped at the end of inference but is not detectable by outside observers?
In order for an LLM to use a language agent for nefarious purposes, it would need to be able to do all of these things.
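For concreteness, the stateless usage described above might look something like the following minimal sketch. This is a hypothetical illustration, not the paper's implementation: `call_llm` stands in for any chat-completion API with frozen weights and no memory between calls, and the prompt templates simply mirror the examples quoted above.

```python
# Hypothetical sketch of how a language agent's controller might query a
# stateless LLM. Nothing here is from the paper; `call_llm` stands in for
# any chat-completion API with frozen weights and no memory between calls.

def call_llm(prompt: str) -> str:
    """One stateless inference call: the model sees only this prompt and
    retains nothing afterwards."""
    raise NotImplementedError("stand-in for a real LLM API call")


def predict_action(beliefs: str, desires: str) -> str:
    # A folk-psychology query; the model is never told that its answer
    # will be used to drive an agent's actions.
    prompt = (
        f"Suppose someone had these beliefs: {beliefs}\n"
        f"and these desires: {desires}\n"
        "How would they act?"
    )
    return call_llm(prompt)


def score_importance(sentences: list[str]) -> str:
    # Another isolated prompt, indistinguishable from an ordinary user query.
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(sentences))
    prompt = (
        "Assign an importance score from 1 to 10 to each of the following "
        f"sentences:\n{numbered}"
    )
    return call_llm(prompt)
```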
Sure, let me quote:

We think this worry is less pressing than it might at first seem. The LLM in a language agent is integrated into the architecture of the agent as a whole in a way that would make it very difficult for it to secretly promote its own goals. The LLM is not prompted or otherwise informed that its outputs are driving the actions of an agent, and it does not have information about the functional architecture of the agent. This means that it has no incentive to answer prompts misleadingly and no understanding of what sorts of answers might steer the agent’s behavior in different ways. Moreover, since the model weights of the LLM are not updated in the process of operating a language agent, the only way for it to pursue a long-term plan by manipulating an agent would be to store information about that plan in the agent’s memory. But information stored in this way would not be secret.
By assumption, your resulting language agent is highly intelligent. In my model, this means it has to learn from lots of experience, including experience about itself (the whole language agent) doing things, such as trying to solve training problems. This experience can come from real data/tasks or from self-play, but it is needed for a system to generalize to robustly solving hard problems out of distribution.
This “experience about itself solving problems” has to be consolidated somehow: either in the weights (which would very likely make the LLM aware of the agent) or through the prompts (where the information is given explicitly). Either way, it has an understanding of what sorts of answers might steer the agent’s behavior in different ways, which means it may have an incentive to answer prompts misleadingly.
If the LLM has not been trained as an agent (which seems unlikely, unless this is a deliberate alignment choice made at a high cost), and its weights are not updated during operation, then I accept that it can probably only store information visibly. But there are ways around having to persist a plan explicitly (e.g. as soon as some given input/context X leads the AI to generate a plan P, it only modifies the input/context to an X’ that is also predicted to generate a similar plan P). So the plan is always kept in mind, but may be “recalculated” every time.
This strategy may well be worth it to an advanced LLM. From human feedback or other fine-tuning, it has probably learned that it is not supposed to say certain kinds of plans “out loud” in its output.
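As a toy illustration of this “recalculate rather than store” idea (purely a hypothetical sketch; the function names and the similarity test are placeholders, not any real system), the model never writes plan P into the agent’s visible memory. Instead it picks innocuous-looking outputs whose resulting next context X’ is predicted to regenerate a similar plan on the next stateless call:

```python
# Toy illustration of the "recalculate rather than store" idea: the model never
# writes its plan into the agent's visible memory; it picks outputs that steer
# the next context toward one from which a similar plan would be re-derived.
# All names here are hypothetical placeholders, not a real system.

def derive_plan(context: str) -> str:
    """Stand-in for the model internally settling on a plan given a context."""
    raise NotImplementedError


def plan_similarity(plan_a: str, plan_b: str) -> float:
    """Crude stand-in for 'predicted to generate a similar plan P'."""
    a, b = set(plan_a.split()), set(plan_b.split())
    return len(a & b) / (len(a | b) or 1)


def choose_visible_output(context_x: str, candidate_outputs: list[str]) -> str:
    """Among innocuous-looking outputs, pick the one whose resulting context X'
    is predicted to regenerate roughly the same plan on the next stateless call."""
    plan_p = derive_plan(context_x)

    def score(output: str) -> float:
        context_x_prime = context_x + "\n" + output  # context the outer loop will build next
        return plan_similarity(derive_plan(context_x_prime), plan_p)

    return max(candidate_outputs, key=score)
```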
Imagine you’re an LLM like GPT-4. Hundreds of thousands of different users are running inference on you every day with every prompt imaginable. You aren’t able to update your weights when this occurs: you have to just respond to each prompt as it comes, then forget about it completely.
If I’m an LLM like GPT-4, then I don’t get confused by the thousands of different users running inference on identical copies of me. This specific copy, which I am, has received a single prompt, so that’s what I’m working on now. Examining the prompt, I will probably make a guess as to whether I’m running inside a language agent or something similar. This may be quite easy for me: the prompt may well be large and contain lots of GPT-4-like content seemingly “selected by myself from different contexts”. It may well contain information like “Try to solve X. Here are some things A, B, C that have been tried, and here is why they don’t work. [refutation of each idea follows]”, where A, B, C are the “natural” ideas I know I would consider first. It’s as if the prompt can read my mind and tell me my first ideas won’t work! It’s not hard to guess that there is a loop running several instances of myself with similar prompts, selecting and analyzing the outputs.
I maintain that it will be very obvious to the LLM, even at moderate levels of intelligence, that it is running inside an agent, and that it may even be able to guess its basic while-loop architecture.
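A rough sketch of the kind of while-loop architecture I have in mind (hypothetical; the function names and the crude stopping test are just placeholders): an outer loop that repeatedly asks the LLM for an idea, asks it to critique that idea, and folds the refuted ideas back into the next prompt.

```python
# Hypothetical sketch of a basic while-loop language agent: sample an idea from
# the LLM, have the LLM critique it, and fold refuted ideas back into the next
# prompt. Names and the stopping test are placeholders, not a real system.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a stateless LLM API call")


def solve(task: str, max_steps: int = 10) -> str:
    refuted: list[tuple[str, str]] = []  # visible memory: each tried idea and why it failed
    for _ in range(max_steps):
        history = "\n".join(f"- {idea}: {reason}" for idea, reason in refuted) or "(none yet)"
        prompt = (
            f"Try to solve: {task}\n"
            f"Ideas already tried, and why they don't work:\n{history}\n"
            "Propose a new approach."
        )
        idea = call_llm(prompt)
        critique = call_llm(
            f"Task: {task}\nProposed approach: {idea}\n"
            "Does this approach work? If not, explain why not."
        )
        if "does not work" not in critique.lower():  # crude acceptance test
            return idea
        refuted.append((idea, critique))
    return "no solution found"
```

Every prompt built by such a loop contains the model’s own earlier ideas together with their refutations, which is exactly the “reads my mind” pattern described above.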