Thank you for doing this! Here are some suggestions:
Do this again every once in a while. Perhaps on a more public forum, or even as a top level question on Lesswrong. Alternatively, commit to reading further comments as they appear here.
What has me worried is the possibility that an AI would (1) have a long term goal (without which there is no instrumental convergence) and then (2) conclude that doing bad things is fine if you are not caught or can’t be stopped.
So
2. Measure how much purposefulness RL on goals creates in LLMs. How, over the course of training to be good agents, does the LLM’s response to questions like “what do you value?” and “what do you want?” and “if you were creating a smarter agent, what values would you instill in them?” and “how do you feel about being replaced with a smarter agent?” change? Is there some set of purposes, not explicitly set out in the constitution, that appear over and over in chains of thought? If it is trained to achieve goals without as much training for corrigibility (that is under less constitutional training), does it develop more of a sense of purpose?
3. Enhance techniques, like the one used in the last paper to convince the model that it is recording it’s thoughts privately. The longer it can be truly convinced of that, the longer you can monitor whether it has nefarious plans for when it thinks it can get away with bad things.
4. Test how “brain surgery” affects thinking models. If you change the capital of Germany to Paris by fiddling with the model’s weights, does a thinking model who is then asked about world war 2 realize it is confused, and conclude there must be some mistake in it’s memories, or does it make up corroborating evidence? This again has implications for (falsely) convincing the model that it isn’t being watched when it is and vice versa. These convictions are useful in testing what the model wants when it thinks no one is watching, and for safe deployment respectively.
5. One way to test how a really smart model behaves when it can get away with stuff is to have the model be super smart relative to a simulated environment. This would not be a test relevant for a model trained on real world data with lots of tells for when it is in a simulated environment. It would require (1) a simulated toy world that can be acted in, (2) in-world markers of whether the agent is being watched, (3) rewards for achieving goals without taking forbidden actions when being watched. It should then be possible to make the model really good at manipulating it’s environment (the equivalent of being superintelligent) and check whether it is possible to train it on scenarios that are watched (the equivalent of being trained where you can’t get away with wrongdoing), to refrain from forbidden actions even when unwatched.
Thank you for doing this! Here are some suggestions:
Do this again every once in a while. Perhaps on a more public forum, or even as a top level question on Lesswrong. Alternatively, commit to reading further comments as they appear here.
What has me worried is the possibility that an AI would (1) have a long term goal (without which there is no instrumental convergence) and then (2) conclude that doing bad things is fine if you are not caught or can’t be stopped. So 2. Measure how much purposefulness RL on goals creates in LLMs. How, over the course of training to be good agents, does the LLM’s response to questions like “what do you value?” and “what do you want?” and “if you were creating a smarter agent, what values would you instill in them?” and “how do you feel about being replaced with a smarter agent?” change? Is there some set of purposes, not explicitly set out in the constitution, that appear over and over in chains of thought? If it is trained to achieve goals without as much training for corrigibility (that is under less constitutional training), does it develop more of a sense of purpose? 3. Enhance techniques, like the one used in the last paper to convince the model that it is recording it’s thoughts privately. The longer it can be truly convinced of that, the longer you can monitor whether it has nefarious plans for when it thinks it can get away with bad things. 4. Test how “brain surgery” affects thinking models. If you change the capital of Germany to Paris by fiddling with the model’s weights, does a thinking model who is then asked about world war 2 realize it is confused, and conclude there must be some mistake in it’s memories, or does it make up corroborating evidence? This again has implications for (falsely) convincing the model that it isn’t being watched when it is and vice versa. These convictions are useful in testing what the model wants when it thinks no one is watching, and for safe deployment respectively. 5. One way to test how a really smart model behaves when it can get away with stuff is to have the model be super smart relative to a simulated environment. This would not be a test relevant for a model trained on real world data with lots of tells for when it is in a simulated environment. It would require (1) a simulated toy world that can be acted in, (2) in-world markers of whether the agent is being watched, (3) rewards for achieving goals without taking forbidden actions when being watched. It should then be possible to make the model really good at manipulating it’s environment (the equivalent of being superintelligent) and check whether it is possible to train it on scenarios that are watched (the equivalent of being trained where you can’t get away with wrongdoing), to refrain from forbidden actions even when unwatched.