The main idea seems good: if you’re in a situation where you think you might be in the process of being deceived by an AI, do not relax when the AI provides great evidence that it is not deceiving you. The primary expected outputs of something really good at deception should be things which don’t look like deception.
Some of the things in the post don’t seem general enough to me, so I want to try to restate them.
Test 1 I like. If you understand all of the gears, you should understand the machine.
Test 2 I like. Tweak the model in a way that should make it worse at process A, and then check the outputs of process A to see whether they actually change; changes (or suspicious non-changes) can reveal biases that you can triangulate against past outputs to catch deception.
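A minimal sketch of what such a check might look like, assuming a PyTorch model and a hypothetical eval_task_a scoring function; the layer_name targeting, noise_scale, and the "accuracy-like, higher is better" score convention are all my illustrative assumptions, not anything from the post:

```python
import copy
import torch

def ablate_and_compare(model, eval_task_a, layer_name, noise_scale=0.1):
    """Degrade a component believed to drive process A, then compare
    process-A scores before and after. A score that barely moves when the
    tweak "should" have hurt it is a flag worth investigating, not proof
    of deception."""
    baseline = eval_task_a(model)  # assumed: higher score = better at process A

    # Perturb only the parameters we believe matter for process A.
    perturbed = copy.deepcopy(model)
    with torch.no_grad():
        for name, param in perturbed.named_parameters():
            if layer_name in name:
                param.add_(noise_scale * torch.randn_like(param))

    degraded = eval_task_a(perturbed)

    return {
        "baseline": baseline,
        "degraded": degraded,
        "drop": baseline - degraded,
        # Near-zero drop is the surprising outcome the test looks for:
        # the tweak should have hurt process A, so no change needs explaining.
        "suspicious": (baseline - degraded) < 0.01,
    }
```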
Test 3 scares me. This seems to me to be a test for the very specific form of deception “hide capabilities while in testing but not when deployed” which seems like a small domain in deception space.
Test 4 I kind of like. One expected behavior from a deceptive AI is to hide capabilities that scare the people studying them. If you see scary capabilities getting less scary, you should take that as a sign of deception unless you have good reason to believe those capabilities should get worse with scale. Thus it is a good idea to find out ahead of time which things should get worse with scale. I do worry that this paradigm relies too much on AI that improves via “more dakka” (e.g. more GPUs, larger datasets, better processors, etc.) rather than via algorithmic improvements, in which case I don’t know that people will have a good handle on which capabilities will get worse. The “scaling helps” section also worries me for this reason.
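A toy sketch of the “decide ahead of time” idea, assuming you have measured scores for each capability across a sequence of model scales; the data structures and names here are illustrative, not from the post:

```python
def flag_scaling_anomalies(capability_scores, expected_trend):
    """capability_scores: {capability_name: [score_per_scale, ...]},
    ordered from smallest to largest model.
    expected_trend: {capability_name: "up" or "down"}, committed to
    *before* looking at the measurements."""
    flags = []
    for name, scores in capability_scores.items():
        trend = expected_trend.get(name, "up")
        decreasing = len(scores) >= 2 and all(
            later <= earlier for earlier, later in zip(scores, scores[1:])
        )
        # A capability getting less scary with scale is only reassuring
        # if we pre-registered an expectation that it should shrink.
        if decreasing and trend != "down":
            flags.append(name)
    return flags
```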
In the section “deceptive models know this” you suggest “deciding on a level of deceptive capabilities that’s low enough that we trust models not to be deceptively aligned”. Won’t that just optimize on things which start deceiving well earlier? I think I may be misinterpreting what you mean by “deceptive capabilities” here. Maybe your “deceptive capabilities” are “smoke” and actual deception is “fire”, but I’m not sure what deceptive capabilities that aren’t deception are.