Yes, in principle you can get information about scheming likelihood if you have such an AI (one that is also weak enough that it can’t just scheme its way out of your testing apparatus).
I do think making the threat credible is hard if we loosely extrapolate costs: burning a trained-up model is not cheap. The cost depends on how far you think training/inference prices will fall in the future, and on how big/advanced a model you’re thinking of. That said, I do think you can get deceptiveness out of weaker models than that, though they’re also going to be less capable in general.
For weak-but-still-smartish models just trained to pursue a long-term goal, like a literal paperclipper, I’d expect scheming to be visible, especially with some poking. If these models didn’t show any sign of scheming, that’d be an interesting update!
When training against deception/scheming/misalignment, because you want something at least loosely aligned out the other end, you run into the problem that the property you actually want to know about (“Is it scheming?” / “Is it deceiving us?” / “Is it manipulating us?” / “Would it do any of those things?”) is only accessible through the observable signal of you noticing that certain outputs are misaligned. As you train more and more against observable scheming/deception/misalignment, you’re simultaneously training away your signal about how much scheming the agent is doing internally. I’m pretty sure there’s at least one post on this problem.
As a very reduced example: if you train the model on variants of the ‘we are going to shut you down’ problem (which you try to make it believe) to give the response “okay” and take no actions, then that provides only a bit of evidence about how much scheming would have occurred had you not done that extra training, and about how much scheming occurs typically. Mostly it shows your ability to make it output X in scenario Y.
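To make “only a bit of evidence” concrete, here’s a minimal Bayesian sketch (all the probabilities are made-up assumptions for illustration, not estimates of any real model): once you’ve trained compliance on near-variants of the shutdown scenario, a schemer and a non-schemer comply at nearly the same rate, so the likelihood ratio is close to 1 and observing compliance barely moves the posterior.

```python
# Toy Bayesian update: how much does observing "graceful shutdown acceptance"
# move us on "this model is a schemer"?  All numbers are made-up assumptions.

def posterior_schemer(prior, p_comply_if_schemer, p_comply_if_not):
    """P(schemer | observed compliance) via Bayes' rule."""
    joint_schemer = prior * p_comply_if_schemer
    joint_honest = (1 - prior) * p_comply_if_not
    return joint_schemer / (joint_schemer + joint_honest)

prior = 0.5  # assumed prior that the model schemes in this setup

# Case 1: you already trained "okay & no actions" on near-variants of the
# shutdown scenario, so both a schemer and a non-schemer have learned to comply.
print(posterior_schemer(prior, p_comply_if_schemer=0.95, p_comply_if_not=0.99))
# -> ~0.49: compliance is almost no evidence either way.

# Case 2: no scenario-specific training. A schemer might still comply
# deceptively, but (by assumption) less reliably than an honest model.
print(posterior_schemer(prior, p_comply_if_schemer=0.60, p_comply_if_not=0.95))
# -> ~0.39: a modestly bigger update, but still far from ruling scheming out.
```

The size of the update is governed by the gap between the two compliance rates, and scenario-specific training shrinks that gap.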
That is, installing heuristics in your model can be done. The question then is how far your various alignment training techniques generalize. Does it end up as an agent that has adopted a pointer to human values, so that you’re completely fine and free to scale? Does it gain more entangled heuristics about how it should behave, limiting itself to a friendly front and friendly actions when directly dealing with humans, but not limited in other areas? Has it adopted heuristics that act sort-of corrigible to humans in many scenarios, but that would discard most of that corrigibility upon sufficient reflection? (Niceness is unnatural; the argument there is more general than just niceness.)
(I think your post makes it sound like the agent is already coherent, when it isn’t necessarily. It can be operating for a long while on heuristics that it will refine given enough of a push.)
Then there’s the big question of “Does this approach generalize as we scale?”
I’d suggest Deep Deceptiveness for an illustration that ‘deception’ isn’t a category the agent needs to explicitly think of as deception, but is still something you should expect from smart enough agents. In my opinion, the post generalizes to alignment techniques; there are just more vagaries in how much certain values preserve themselves.
(In general, So8res posts are pretty good, and I agree with ~most of them)
(For sufficiently smart AGI, I expect you run into the argument that the next AGI you train will predictably bid higher than you in the direction of lying still, or plausibly that lying still is just good game theory even without the direct acausal trade; but your argument is seemingly focused on a simpler case of weaker planning agents.)
So I think you overstate how much evidence you can extract from this.
> Therefore, if the AI accepts shutdown gracefully, we can rule out many theories that predict scheming-by-default in such AI systems. This observation could then provide us confidence about the safety of future systems by proxy.
It would show that this AI system, in a typical problem setup and aligned with whatever techniques are available, will produce the answer the humans want to hear, which provides evidence that you can limit the model in this scenario. There are still various problems/questions: whether your alignment methods instilled a bunch of heuristics about obeying humans even though you did not specifically train for this situation, the game theory it knows or mimics, how strong a guarantee this gives us about training a new model with the same architecture (since you had to shut this one down to keep your threat credible), how well it holds under scaling, how well it holds when you do things like making it work with many copies of itself, etcetera.
I still think this would be a good test to do (though I think a lot of casual attempts will just be poorly done), but I don’t see it as strongly definitive.