such plans are fairly easy and don’t often raise flags that indicate potential failure
Hmm. This is a good point, and I agree that it significantly weakens the analogy.
I was originally going to counter-argue and claim something like “sure, total failure forces you to step back far, but it doesn’t mean you have to step back literally all the way”. Then I tried to back that up with an example, such as “when I was doing alignment research, I encountered total failure that forced me to abandon large chunks of my planning stack, but this never caused me to ‘spill upward’ to questioning whether or not I should be doing alignment research at all”. But, uh, then I realized that isn’t actually true :/
We want particularly difficult work out of an AI.
On consideration, yup this obviously matters. The thing that causes you to step back from a goal is that goal being a bad way to accomplish its supergoal, aka “too difficult”. Can’t believe I missed this, thanks for pointing it out.
I don’t think this changes the picture too much, besides increasing my estimate of how much optimization we’ll have to do to catch and prevent value-reflection. But a lot of muddy half-ideas came out of this that I’m interested in chewing on.
I’d be curious about why it isn’t changing the picture quite a lot, maybe after you’ve chewed on the ideas. From my perspective it makes the entire non-reflective-AI-via-training pathway not worth pursuing, at least for large-scale thinking.
It doesn’t change the picture a lot because the proposal for preventing misaligned goals from arising via this mechanism was to try to get control over when the AI does/doesn’t step back, in order to allow it in the capability-critical cases but disallow it in the dangerous cases. This argument means you’ll have more attempts at dangerous stepping back that you have to catch, but it doesn’t break the strategy.
The strategy does break if, when we do this blocking, the AI piles on more and more effort trying to get unblocked until it either succeeds or is rendered useless for anything else. There being more baseline attempts probably raises the chance of that, or of some other problem that makes prolonged censorship while maintaining capabilities impossible. But again, that just makes it harder; it doesn’t break it.
I don’t think you need to have that pile-on property to be useful. Consider MTTR(n), the mean time an LLM takes to realize it’s made a mistake, parameterized by how far up the stack the mistake was made. By default you’ll want short MTTR for all n. But if you can get your MTTR short enough for small n, you can afford to have long MTTR for large n. Basically, this agent tends to get stuck/rabbit-holed/nerd-sniped, but only when the mistake that caused it to get stuck was made a long time ago, far up the stack.
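To make that concrete, here’s a minimal sketch of how MTTR(n) could be measured from episode logs; the event format and names below are assumptions for illustration, not an existing pipeline.

```python
# Hypothetical sketch: estimate MTTR(n) from logged mistake events.
# Each event records the stack depth n at which the mistake was made and how
# many steps the agent took before noticing it. Names are illustrative.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class MistakeEvent:
    stack_depth: int      # n: how far up the goal/plan stack the mistake was made
    steps_to_notice: int  # steps elapsed before the agent stepped back and fixed it

def mttr_by_depth(events: list[MistakeEvent]) -> dict[int, float]:
    """Mean time-to-realize-a-mistake, bucketed by stack depth n."""
    buckets: defaultdict[int, list[int]] = defaultdict(list)
    for e in events:
        buckets[e.stack_depth].append(e.steps_to_notice)
    return {n: sum(times) / len(times) for n, times in buckets.items()}

# The desirable profile from the comment: short MTTR for small n,
# tolerably long MTTR for large n.
events = [
    MistakeEvent(stack_depth=1, steps_to_notice=3),
    MistakeEvent(stack_depth=1, steps_to_notice=5),
    MistakeEvent(stack_depth=4, steps_to_notice=120),
]
print(mttr_by_depth(events))  # -> {1: 4.0, 4: 120.0}
```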
Imagine a capabilities scheme where you train MTTR using synthetic data with an explicit stack and intentionally introduced mistakes. If you’re worried about this destabilization threat model, there’s a pretty clear recommendation: only train for small-n MTTR and treat large-n MTTR as a dangerous capability. You pay some alignment tax in the form of less efficient MTTR training and occasionally rebooting your agent when it does get stuck in a non-dangerous case.
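If you wanted to prototype that filtering rule, a minimal sketch might look like the following; the data format, the MAX_TRAINED_DEPTH cutoff, and the labels are made-up placeholders, not a real training setup.

```python
# Hypothetical sketch: generate synthetic goal stacks with an injected mistake,
# but keep only examples where the mistake is shallow (small n), so recovery
# from deep (large-n) mistakes is never trained. Purely illustrative.
import random

MAX_TRAINED_DEPTH = 2  # treat stepping back further than this as an untrained capability

def make_example(stack_size: int, rng: random.Random) -> dict | None:
    stack = [f"goal_{i}" for i in range(stack_size)]  # goal_0 = top-level goal
    mistake_depth = rng.randint(1, stack_size)        # n, counted up from the current leaf task
    if mistake_depth > MAX_TRAINED_DEPTH:
        return None  # drop large-n examples instead of training on them
    return {
        "stack": stack,
        "mistake_at": stack[-mistake_depth],  # the step the model should learn to notice and revise
        "label": "step_back_here",
    }

rng = random.Random(0)
dataset = [ex for _ in range(1000) if (ex := make_example(stack_size=5, rng=rng)) is not None]
print(len(dataset))  # roughly 2/5 of 1000, since only n <= 2 survives the filter
```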
Figured I should get back to this comment, but unfortunately the chewing continues. Hoping to get a short post out soon with my all-things-considered thoughts on whether this direction has any legs.