Re sharp left turn: Maybe I misunderstand the “sharp left turn” term, but I thought this just means a sudden extreme gain in capabilities? If I am correct, then I expect you might get a “sharp left turn” with a simulator during training: e.g., a user fine-tunes it on one additional dataset, and suddenly FOOOM. (Say, it can suddenly simulate agents that propose takeover plans that would actually work, when previously they failed at this with identical prompting.)
One implication I see is that if the simulator architecture becomes frequently used, it might be really hard to tell whether a thing is dangerous or not. For example, it might behave completely fine with most prompts and catastrophically with some other prompts, and you will never know until you try. (Or unless you do some extra interpretability/other work that doesn’t yet exist.) It would be rather unfortunate if the Vulnerable World Hypothesis was true because of specific LLM prompts :-).
I agree that a sudden gain in capabilities can make a simulated agent undergo a sharp left turn (coming up with more effective takeover plans is a great example). My original question was about whether the simulator itself could undergo a sharp left turn. My current understanding is that a pure simulator would not become misaligned if its capabilities suddenly increase because it remains myopic, so we only have to worry about a sharp left turn for simulated agents rather than the simulator itself. Of course, in practice, language models are often fine-tuned with RL, which creates agentic incentives on the simulator level as well.
You make a good point about the difficulty of identifying dangerous models if the danger is triggered by very specific prompts. I think this may cut both ways, though: the same sensitivity could make it difficult for a simulated agent to execute a chain of dangerous behaviors, since the chain could be interrupted by certain inputs from the user.
the catastrophic risk with some prompts would be the form that bad behavior takes for many possible AIs, not just predictive-model-only ones: any model can have small volumes of activation space that suddenly break an invariant, in a way that isn’t detectable in tractable time because the latent space is too messy for a prover to jump around in. which is to say, currently most neural networks are far too messy for provers, but the biggest ones that aren’t are still pretty big, and we shouldn’t give up on proving things too early.
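As a toy sketch of the “small volumes of activation space” point (my own illustration, with made-up names like `invariant_holds` and `BAD_CENTER`, not anything from the comment above): a violation confined to a tiny ball in a high-dimensional space is essentially invisible to random probing, which is the sampling analogue of the intractability claim.

```python
# Toy sketch: a tiny region of a high-dimensional "activation space" violates an
# invariant, and random testing essentially never finds it.
import numpy as np

rng = np.random.default_rng(0)

DIM = 64                        # dimensionality of the toy activation space
BAD_CENTER = rng.normal(size=DIM)
BAD_RADIUS = 0.05               # the "small volume" where the invariant breaks

def invariant_holds(activation: np.ndarray) -> bool:
    """The invariant fails only inside a tiny ball around BAD_CENTER."""
    return float(np.linalg.norm(activation - BAD_CENTER)) > BAD_RADIUS

def random_testing(n_samples: int) -> int:
    """Count how many random probes land in the bad region (almost always 0)."""
    samples = rng.normal(size=(n_samples, DIM))
    distances = np.linalg.norm(samples - BAD_CENTER, axis=1)
    return int(np.sum(distances <= BAD_RADIUS))

if __name__ == "__main__":
    # Even 100,000 random probes are overwhelmingly unlikely to hit the bad ball.
    print("violations found:", random_testing(100_000))
```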
in general, a sharp left turn is where there’s a very sharp difference between the test set and the future input set, in a way the model is effectively guaranteed to be smart enough to detect almost perfectly due to strong generalization. for example, suppose you’re modeling the behavior of a physics simulation, and the simulation has a particular kind of chaotic behavior that is simulated slightly incorrectly: similar enough that almost any macroscopic measurement comes out the same, but different enough that a few layers of texture detectors can trivially pick it up. (it would be obvious to the human eye as well, but only if you’re looking for it and have real vs fake to compare.) this is a realistic threat, as sims almost always get fluids just slightly wrong; the learned model would come to have a very strong intuitive sense of the behavior of the simulator’s fluids, and would be constantly slightly surprised by the real world. a strong enough model would be able to recognize the increased information content due to the very slight distribution mismatch in the visual shapes of the fluids, and if it were smart enough to aggregate this distribution mismatch and propagate it into tokenized, symbolically-relational facts, it could extrapolate globally that it’s finally acting in the real physical world.
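To make the “aggregate the distribution mismatch” step concrete, here is a minimal sketch (my own, assuming purely for illustration that the relevant fluid-texture statistic can be summarized as a Gaussian; the shift size and all names are mine): per-frame surprisal under the simulator-trained model looks unremarkable on its own, but the accumulated excess crosses a threshold on real-world data while staying near zero on simulated data.

```python
# Minimal sketch of accumulated "constant slight surprise" distinguishing sim from real.
import numpy as np

rng = np.random.default_rng(0)

# Statistics of some fluid-texture feature as the simulator renders it.
SIM_MEAN, SIM_STD = 0.0, 1.0
# Real fluids: the same feature, shifted by an amount too small to notice per frame.
REAL_MEAN = 0.2

def surprisal(x: np.ndarray) -> np.ndarray:
    """Negative log-likelihood of x under the simulator-trained Gaussian model."""
    return 0.5 * ((x - SIM_MEAN) / SIM_STD) ** 2 + 0.5 * np.log(2 * np.pi * SIM_STD**2)

def aggregated_evidence(frames: np.ndarray) -> float:
    """Total excess surprisal relative to what in-distribution (sim) frames produce."""
    expected_in_dist = 0.5 + 0.5 * np.log(2 * np.pi * SIM_STD**2)
    return float(np.sum(surprisal(frames) - expected_in_dist))

if __name__ == "__main__":
    n = 200_000
    sim_frames = rng.normal(SIM_MEAN, SIM_STD, size=n)
    real_frames = rng.normal(REAL_MEAN, SIM_STD, size=n)
    threshold = 1500.0  # well above the noise floor for sim data, well below real data
    for name, frames in [("sim", sim_frames), ("real", real_frames)]:
        evidence = aggregated_evidence(frames)
        print(f"{name}: evidence={evidence:.0f} -> concludes real world? {evidence > threshold}")
```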
I think that, due to natural abstraction, we can ensure there are no adversarial examples arising from real fluid dynamics being slightly different from the simulator’s.
This actually implies a fairly straightforward experiment… hmmm… now if only I were more productive...