Joe also discusses simplicity arguments for scheming, which suppose that schemers may be “simpler” than non-schemers, and therefore more likely to be produced by SGD.
I’m not familiar with the details of Joe’s arguments, but to me the strongest argument from simplicity is not that schemers are simpler than non-schemers; it’s that scheming itself is conceptually simple and instrumentally useful. So any system capable of doing useful and general cognitive work will necessarily be at least capable of scheming.
We will address this question in greater detail in a future post. However, we believe that current evidence about inductive biases points against scheming for a variety of reasons. Very briefly:
Modern deep neural networks behave like ensembles of shallower networks. Scheming seems to involve chains of if-then reasoning, which would be hard to implement in shallow networks.
Networks have a bias toward low-frequency functions, that is, functions whose outputs change little as their inputs change. But scheming requires the AI to change its behavior dramatically (executing a treacherous turn) in response to subtle cues indicating that it is not in a sandbox and could successfully escape.
There’s no plausible account of inductive biases that does support scheming. The current literature on scheming appears to have been inspired by Paul Christiano’s speculations about malign intelligences in Solomonoff induction, a purely theoretical model of probabilistic reasoning which is provably unrealizable in the real world.[16] Neural nets look nothing like this.
In contrast, points of comparison that are more relevant to neural network training, such as isolated brain cortices, don’t scheme. Your linguistic cortex is not “instrumentally pretending to model linguistic data in pursuit of some hidden objective.”
Also, don’t these counterpoints prove too much? If networks trained via SGD can’t learn scheming, why should we expect models trained via SGD to be capable of learning or using any high-level concepts, even desirable ones?
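The "ensembles of shallower networks" point can be made concrete with a toy calculation (in the spirit of Veit et al.'s analysis of residual networks, not taken from this post): an n-block residual network unrolls into 2^n paths, the number of paths of depth d is C(n, d), and so almost no paths use anywhere near the full nominal depth. The depth n = 20 here is an arbitrary illustration.

```python
# Sketch: path-depth distribution in an n-block residual network.
# Each block is either taken (contributing depth 1) or skipped, so the
# number of paths of depth d is the binomial coefficient C(n, d).
import math

n = 20  # nominal depth (number of residual blocks); arbitrary for illustration
total = 2 ** n

# Fraction of paths whose depth is at most n/2.
shallow = sum(math.comb(n, d) for d in range(n // 2 + 1)) / total
# Exactly one path passes through every block.
deepest = math.comb(n, n) / total

print(f"paths with depth <= {n // 2}: {shallow:.1%}")
print(f"fraction of paths at full depth {n}: {deepest:.2e}")
```

Most of the ensemble's paths are far shallower than the nominal depth, which is the structural fact the bullet leans on.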
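The low-frequency bias can also be illustrated with a small stdlib-only sketch (a toy of my own construction, not from the post): even an untrained one-hidden-layer ReLU network is piecewise linear, so its Fourier spectrum is dominated by low frequencies. The width, weight ranges, and frequency bands below are arbitrary choices.

```python
# Sketch: spectral concentration of a random shallow ReLU network.
import cmath
import random

random.seed(0)
WIDTH, N = 32, 256

# Random one-hidden-layer ReLU network f(x) = sum_j a_j * relu(w_j*x + b_j).
params = [(random.uniform(-3, 3), random.uniform(-3, 3), random.uniform(-1, 1))
          for _ in range(WIDTH)]

def f(x):
    return sum(a * max(0.0, w * x + b) for w, b, a in params)

ys = [f(i / N) for i in range(N)]
mean = sum(ys) / N
ys = [y - mean for y in ys]  # drop the DC component

def power(k):
    # Naive DFT power at integer frequency k.
    c = sum(y * cmath.exp(-2j * cmath.pi * k * i / N) for i, y in enumerate(ys))
    return abs(c) ** 2

low = sum(power(k) for k in range(1, 9))         # lowest 8 frequencies
high = sum(power(k) for k in range(33, N // 2))  # everything above k = 32
print(f"low-band power / high-band power = {low / high:.1f}")
```

The low band holds overwhelmingly more power than the high band, which is the sense in which such functions "change little as their inputs change."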
These bullets seem like plausible reasons for why you probably won’t get scheming within a single forward pass of a current-paradigm DL model, but are already inapplicable to the real-world AI systems in which these models are deployed.
LLM-based systems are already capable of long chains of if-then reasoning, and can change their behavior dramatically given a different initial prompt, often in surprising ways.
If the most relevant point of comparison to NN training is an isolated brain cortex, then that’s just saying that NN training will never be useful in isolation, since an isolated brain cortex can’t do much (good or bad) unless it is actually hooked up to a body, or at least the rest of a brain.
It’s not that they can’t learn scheming. A sufficiently wide network can learn any continuous function. It’s that they’re biased strongly against scheming, and they’re not going to learn it unless the training data primarily consists of examples of humans scheming against one another, or something.
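The universal-approximation point above can be made concrete: a one-hidden-layer ReLU network with one unit per knot can represent any piecewise-linear interpolant, and such interpolants approximate any continuous function on an interval arbitrarily well. A minimal sketch, with an arbitrary target function and width:

```python
# Sketch: explicit width-100 ReLU network interpolating a continuous function.

def relu(z):
    return max(0.0, z)

def build_relu_net(f, n_knots):
    """Return (bias, units) for f_hat(x) = bias + sum_i w_i * relu(x - k_i),
    the piecewise-linear interpolant of f at evenly spaced knots on [0, 1]."""
    knots = [i / (n_knots - 1) for i in range(n_knots)]
    slopes = [(f(knots[i + 1]) - f(knots[i])) / (knots[i + 1] - knots[i])
              for i in range(n_knots - 1)]
    # Each hidden unit switches on at one knot and adjusts the slope there.
    weights = [slopes[0]] + [slopes[i] - slopes[i - 1]
                             for i in range(1, n_knots - 1)]
    return f(0.0), list(zip(knots[:-1], weights))

def net(params, x):
    bias, units = params
    return bias + sum(w * relu(x - k) for k, w in units)

def target(x):          # any continuous function on [0, 1] would do
    return x * x

params = build_relu_net(target, 101)  # a width-100 hidden layer

max_err = max(abs(net(params, i / 1000) - target(i / 1000))
              for i in range(1001))
print(f"max error with 100 hidden units: {max_err:.2e}")
```

Widening the network (more knots) drives the error down, which is the expressivity half of the argument; the inductive-bias half is about which of these representable functions training actually finds.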
These bullets seem like plausible reasons for why you probably won’t get scheming within a single forward pass of a current-paradigm DL model, but are already inapplicable to the real-world AI systems in which these models are deployed.
Why does chaining forward passes together make any difference? Each forward pass has been optimized to mimic patterns in the training data. Nothing more, nothing less. It’ll scheme in context X iff scheming behavior is likely in context X in the training corpus.
It’s that they’re biased strongly against scheming, and they’re not going to learn it unless the training data primarily consists of examples of humans scheming against one another, or something.
I’m saying if they’re biased strongly against scheming, that implies they are also biased against usefulness to some degree.
As a concrete example, it is demonstrably much easier to create a fake blood-testing company and scam investors and patients out of billions of dollars than it is to actually revolutionize blood testing. I claim that there is something like a core of general intelligence required to execute on things like the latter, which necessarily implies possession of most or all of the capabilities needed to pull off the former.
This is just an equivocation, though. Of course you could train an AI to “scheme” against people in the sense of selling a fake blood testing service. That doesn’t mean that by default you should expect AIs to spontaneously start scheming against you, and in ways you can’t easily notice.