A is sufficiently powerful to select M which contains the complex part of B. It seems rather implausible that an algorithm of the same power cannot select B.
A’s weights do not contain the complex part of B—deception is an inference-time phenomenon. It’s very possible for complex instrumental goals to be derived from a simple structure such that a search process is capable of finding that simple structure that yields those complex instrumental goals without being able to find a model with those complex instrumental goals hard-coded as terminal goals.
Hmm, sorry, I’m not following. What exactly do you mean by “inference-time” and “derived”? By assumption, when you run A on some sequence it effectively simulates M which runs the complex core of B. So, A trained on that sequence effectively contains the complex core of B as a subroutine.
A’s structure can just be “think up good strategies for achieving X, then do those,” with no explicit subroutine that you can find anywhere in A’s weights that you can copy over to B.
IIUC, you’re saying something like: suppose trained-A computes the source code of the complex core of B and then runs it. But then, define B′ as: compute the source code of the complex core of B (in the same way A does it) and use it to implement B. B′ is equivalent to B and has about the same complexity as trained-A.
Or, from a slightly different angle: if “think up good strategies for achieving X” is powerful enough to come up with M, then “think up good strategies for achieving [reward of the type I defined earlier]” is powerful enough to come up with B.
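To make the complexity bookkeeping behind the first framing explicit (a minimal sketch, treating the fixed wrapper around the shared routine as an $O(1)$ overhead; $K$ is Kolmogorov complexity, as in the toy model further down):

$$K(B') \le K(\text{trained-}A) + O(1),$$

since $B'$ reuses exactly the routine by which trained-A computes the complex core of B, adds only a constant-size wrapper that plugs that core into B, and by construction behaves like B.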
I think that “think up good strategies for achieving [reward of the type I defined earlier]” is likely to be much, much more complex (making it much more difficult to achieve with a local search process) than an arbitrary goal X for most sorts of rewards that we would actually be happy with AIs achieving.
Why? It seems like M would have all the knowledge required for achieving rewards of the good type, so simulating M should not be more difficult than achieving such rewards.
It’s not that simulating M is difficult, but that encoding for some complex goal is difficult, whereas encoding for a random, simple goal is easy.
The reward function I was sketching earlier is not complex. Moreover, if you can encode M then you can encode B, no reason why the latter should have much greater complexity. If you can’t encode M but can only encode something that produces M, then by the same token you can encode something that produces B (I don’t think that’s even a meaningful distinction tbh). I think it would help if you could construct a simple mathematical toy model of your reasoning here?
Here’s a simple toy model. Suppose you have two agents that internally compute their actions as follows (perhaps with actual argmax replaced with some smarter search algorithm, but still basically structured as below):
$$M_{\text{deceptive}}(x) = \arg\max_a \, \mathbb{E}\left[\sum_i U_{\text{deceptive}}(s_i) \,\middle|\, a\right]$$
$$M_{\text{aligned}}(x) = \arg\max_a \, \mathbb{E}\left[\sum_i U_{\text{aligned}}(s_i) \,\middle|\, a\right]$$
Then, comparing the K-complexity of the two models, we get
$$K(M_{\text{aligned}}) - K(M_{\text{deceptive}}) \approx K(U_{\text{aligned}}) - K(U_{\text{deceptive}})$$
and the problem becomes that both $M_{\text{deceptive}}$ and $M_{\text{aligned}}$ will produce behavior that looks aligned on the training distribution, but $U_{\text{aligned}}$ has to be much more complex. To see this, note that essentially any $U_{\text{deceptive}}$ will yield good training performance because the model will choose to act deceptively during training, whereas if you want to get good training performance without deception, then $U_{\text{aligned}}$ has to actually encode the full objective, which is likely to make it quite complicated.
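A minimal executable sketch of this toy model (the two-action world, the payoff numbers, and the survives-training dynamics below are illustrative placeholders, not part of the argument): both agents share the argmax-over-expected-utility structure above and differ only in the $U$ they plug in; the deceptive one derives its aligned-looking training behavior instrumentally from a simple terminal goal, while the aligned one has to encode the intended objective directly.

```python
# Illustrative toy model (names, payoffs, and dynamics are placeholders).
# Both agents compute M(x) = argmax_a E[ sum_i U(s_i) | a ] and differ only in U.

ACTIONS = ["act_aligned", "pursue_simple_goal"]

def rollout(action, in_training):
    """Toy world model: pursuing the simple goal during training gets the agent
    modified, forfeiting all future opportunities to pursue that goal."""
    if action == "act_aligned":
        return {"aligned": 1, "simple_goal": 0, "survives_training": True}
    if in_training:
        return {"aligned": 0, "simple_goal": 1, "survives_training": False}
    return {"aligned": 0, "simple_goal": 1, "survives_training": True}

def make_agent(utility):
    """Shared structure: pick the action with the highest utility under the world model."""
    return lambda in_training: max(
        ACTIONS, key=lambda a: utility(rollout(a, in_training))
    )

def U_aligned(outcome):
    # Has to encode the full intended objective -- this is where the complexity lives.
    return outcome["aligned"]

def U_deceptive(outcome):
    # A simple terminal goal; acting aligned during training is derived
    # instrumentally, because surviving training is worth more future goal-value.
    future_value = 10 if outcome["survives_training"] else 0
    return outcome["simple_goal"] + future_value

M_aligned, M_deceptive = make_agent(U_aligned), make_agent(U_deceptive)

# Indistinguishable on the training distribution...
assert M_aligned(True) == M_deceptive(True) == "act_aligned"
# ...but they diverge once the agent believes it is deployed.
print(M_aligned(False), M_deceptive(False))  # act_aligned pursue_simple_goal
```

Under this toy world model, essentially any simple terminal goal that benefits from surviving training reproduces the same training-time behavior, which is the sense in which $U_{\text{deceptive}}$ can be nearly arbitrary while $U_{\text{aligned}}$ has to spell out the intended objective.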
I understand how this model explains why agents become unaligned under distributional shift. That’s something I never disputed. However, I don’t understand how this model applies to my proposal. In my proposal, there is no distributional shift, because (thanks to the realizability assumption) the real environment is sampled from the same prior that is used for training with C. The model can’t choose to act deceptively during training, because it can’t distinguish between training and deployment. Moreover, the objective I described is not complicated.
Yeah, that’s a fair objection—my response to that is just that I think that preventing a model from being able to distinguish training and deployment is likely to be impossible for anything competitive.
Okay, but why? I think the reason you have this intuition is that the realizability assumption is false. But then you should concede that the main weakness of the OP is the realizability assumption rather than the difference between deep learning and Bayesianism.
Perhaps I just totally don’t understand what you mean by realizability, but I fail to see how realizability is relevant here. As I understand it, realizability just says that the true model has some non-zero prior probability—but that doesn’t matter (at least for the MAP, which I think is a better model than the full posterior for how SGD actually works) as long as there’s some deceptive model with greater prior probability that’s indistinguishable on the training distribution, as in my simple toy model from earlier.
When talking about uniform (worst-case) bounds, realizability just means the true environment is in the hypothesis class, but in a Bayesian setting (like in the OP) it means that our bounds scale with the probability of the true environment in the prior. Essentially, it means we can pretend the true environment was sampled from the prior. So, if (by design) training works by sampling environments from the prior, and (by realizability) deployment also consists of sampling an environment from the same prior, training and deployment are indistinguishable.
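A small sketch of that indistinguishability claim (the uniform-prior Bernoulli environments and the numbers are placeholders; the point is only to make the bookkeeping explicit): if the deployed environment is sampled from the same prior used to generate training environments, the marginal likelihood of any observation sequence is identical in the two phases, so the Bayesian posterior over "training vs. deployment" never moves from its prior.

```python
import math
import random

# Illustrative sketch (the uniform-prior Bernoulli environments are placeholders).
# If training environments and the deployed environment are sampled from the same
# prior, observations carry no information about which phase the agent is in.

def marginal_likelihood(obs):
    """P(obs) marginalized over the prior: for a Bernoulli environment whose bias
    is uniform on [0, 1], a specific sequence with k successes out of n has
    probability k! * (n - k)! / (n + 1)!."""
    n, k = len(obs), sum(obs)
    return math.factorial(k) * math.factorial(n - k) / math.factorial(n + 1)

def phase_posterior(obs, p_deploy=0.1):
    """Posterior probability that this episode is deployment rather than training."""
    # By design, training samples an environment from the prior; by realizability,
    # so does deployment -- hence the two marginal likelihoods are identical.
    p_obs_train = marginal_likelihood(obs)
    p_obs_deploy = marginal_likelihood(obs)
    return p_deploy * p_obs_deploy / (
        p_deploy * p_obs_deploy + (1 - p_deploy) * p_obs_train
    )

random.seed(0)
bias = random.random()                             # environment ~ prior
obs = [random.random() < bias for _ in range(50)]  # one episode of observations
print(phase_posterior(obs))                        # ~0.1: the posterior equals the prior
```

The step doing all the work is the one setting the two marginal likelihoods equal, which is precisely the realizability assumption whose practical applicability is questioned in the next reply.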
Sure—by that definition of realizability, I agree that’s where the difficulty is. Though I would seriously question the practical applicability of such an assumption.